Sep 13 00:42:36.011851 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:42:36.011867 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:42:36.011874 kernel: BIOS-provided physical RAM map:
Sep 13 00:42:36.011878 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Sep 13 00:42:36.011882 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Sep 13 00:42:36.011886 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Sep 13 00:42:36.011891 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Sep 13 00:42:36.011895 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Sep 13 00:42:36.011899 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Sep 13 00:42:36.011903 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Sep 13 00:42:36.011907 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Sep 13 00:42:36.011912 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Sep 13 00:42:36.011916 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Sep 13 00:42:36.011921 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Sep 13 00:42:36.011926 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Sep 13 00:42:36.011931 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Sep 13 00:42:36.011936 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Sep 13 00:42:36.011941 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Sep 13 00:42:36.011946 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 13 00:42:36.011950 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Sep 13 00:42:36.011955 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Sep 13 00:42:36.011960 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Sep 13 00:42:36.011964 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Sep 13 00:42:36.011969 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Sep 13 00:42:36.011973 kernel: NX (Execute Disable) protection: active
Sep 13 00:42:36.011978 kernel: APIC: Static calls initialized
Sep 13 00:42:36.011983 kernel: SMBIOS 3.2.1 present.
Sep 13 00:42:36.011988 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Sep 13 00:42:36.011993 kernel: tsc: Detected 3400.000 MHz processor
Sep 13 00:42:36.011998 kernel: tsc: Detected 3399.906 MHz TSC
Sep 13 00:42:36.012003 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:42:36.012008 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:42:36.012013 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Sep 13 00:42:36.012018 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Sep 13 00:42:36.012023 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:42:36.012027 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Sep 13 00:42:36.012032 kernel: Using GB pages for direct mapping
Sep 13 00:42:36.012038 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:42:36.012043 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Sep 13 00:42:36.012048 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Sep 13 00:42:36.012055 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Sep 13 00:42:36.012060 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Sep 13 00:42:36.012065 kernel: ACPI: FACS 0x000000008C66CF80 000040
Sep 13 00:42:36.012070 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Sep 13 00:42:36.012076 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Sep 13 00:42:36.012081 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Sep 13 00:42:36.012086 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Sep 13 00:42:36.012091 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Sep 13 00:42:36.012096 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Sep 13 00:42:36.012101 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Sep 13 00:42:36.012107 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Sep 13 00:42:36.012113 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:42:36.012118 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Sep 13 00:42:36.012123 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Sep 13 00:42:36.012128 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:42:36.012133 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:42:36.012138 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Sep 13 00:42:36.012143 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Sep 13 00:42:36.012148 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:42:36.012153 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Sep 13 00:42:36.012159 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Sep 13 00:42:36.012164 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Sep 13 00:42:36.012169 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Sep 13 00:42:36.012175 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Sep 13 00:42:36.012180 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Sep 13 00:42:36.012185 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Sep 13 00:42:36.012190 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Sep 13 00:42:36.012195 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Sep 13 00:42:36.012201 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Sep 13 00:42:36.012206 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Sep 13 00:42:36.012211 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Sep 13 00:42:36.012216 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Sep 13 00:42:36.012221 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Sep 13 00:42:36.012226 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Sep 13 00:42:36.012231 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Sep 13 00:42:36.012236 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Sep 13 00:42:36.012241 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Sep 13 00:42:36.012247 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Sep 13 00:42:36.012252 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Sep 13 00:42:36.012257 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Sep 13 00:42:36.012262 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Sep 13 00:42:36.012267 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Sep 13 00:42:36.012272 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Sep 13 00:42:36.012277 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Sep 13 00:42:36.012282 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Sep 13 00:42:36.012287 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Sep 13 00:42:36.012293 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Sep 13 00:42:36.012298 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Sep 13 00:42:36.012303 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Sep 13 00:42:36.012308 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Sep 13 00:42:36.012313 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Sep 13 00:42:36.012319 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Sep 13 00:42:36.012324 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Sep 13 00:42:36.012329 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Sep 13 00:42:36.012334 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Sep 13 00:42:36.012340 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Sep 13 00:42:36.012345 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Sep 13 00:42:36.012350 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Sep 13 00:42:36.012355 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Sep 13 00:42:36.012360 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Sep 13 00:42:36.012365 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Sep 13 00:42:36.012370 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Sep 13 00:42:36.012375 kernel: No NUMA configuration found
Sep 13 00:42:36.012380 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Sep 13 00:42:36.012385 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Sep 13 00:42:36.012391 kernel: Zone ranges:
Sep 13 00:42:36.012396 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:42:36.012402 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Sep 13 00:42:36.012407 kernel:   Normal   [mem 0x0000000100000000-0x000000086effffff]
Sep 13 00:42:36.012412 kernel: Movable zone start for each node
Sep 13 00:42:36.012417 kernel: Early memory node ranges
Sep 13 00:42:36.012422 kernel:   node   0: [mem 0x0000000000001000-0x0000000000098fff]
Sep 13 00:42:36.012427 kernel:   node   0: [mem 0x0000000000100000-0x000000003fffffff]
Sep 13 00:42:36.012432 kernel:   node   0: [mem 0x0000000040400000-0x0000000081b25fff]
Sep 13 00:42:36.012438 kernel:   node   0: [mem 0x0000000081b28000-0x000000008afccfff]
Sep 13 00:42:36.012443 kernel:   node   0: [mem 0x000000008c0b2000-0x000000008c23afff]
Sep 13 00:42:36.012448 kernel:   node   0: [mem 0x000000008eeff000-0x000000008eefffff]
Sep 13 00:42:36.012454 kernel:   node   0: [mem 0x0000000100000000-0x000000086effffff]
Sep 13 00:42:36.012466 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Sep 13 00:42:36.012472 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:42:36.012478 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Sep 13 00:42:36.012483 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 13 00:42:36.012490 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Sep 13 00:42:36.012495 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Sep 13 00:42:36.012500 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Sep 13 00:42:36.012506 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Sep 13 00:42:36.012511 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Sep 13 00:42:36.012517 kernel: ACPI: PM-Timer IO Port: 0x1808
Sep 13 00:42:36.012522 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Sep 13 00:42:36.012528 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Sep 13 00:42:36.012533 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Sep 13 00:42:36.012539 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Sep 13 00:42:36.012545 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Sep 13 00:42:36.012550 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Sep 13 00:42:36.012555 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Sep 13 00:42:36.012561 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Sep 13 00:42:36.012566 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Sep 13 00:42:36.012571 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Sep 13 00:42:36.012577 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Sep 13 00:42:36.012582 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Sep 13 00:42:36.012589 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Sep 13 00:42:36.012594 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Sep 13 00:42:36.012599 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Sep 13 00:42:36.012605 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Sep 13 00:42:36.012610 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Sep 13 00:42:36.012616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:42:36.012621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:42:36.012626 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:42:36.012632 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:42:36.012638 kernel: TSC deadline timer available
Sep 13 00:42:36.012644 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Sep 13 00:42:36.012649 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Sep 13 00:42:36.012655 kernel: Booting paravirtualized kernel on bare hardware
Sep 13 00:42:36.012660 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:42:36.012666 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 13 00:42:36.012671 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 13 00:42:36.012677 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 13 00:42:36.012682 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 13 00:42:36.012689 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:42:36.012695 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:42:36.012700 kernel: random: crng init done
Sep 13 00:42:36.012706 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Sep 13 00:42:36.012711 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Sep 13 00:42:36.012717 kernel: Fallback order for Node 0: 0
Sep 13 00:42:36.012722 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Sep 13 00:42:36.012727 kernel: Policy zone: Normal
Sep 13 00:42:36.012734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:42:36.012739 kernel: software IO TLB: area num 16.
Sep 13 00:42:36.012745 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 732408K reserved, 0K cma-reserved)
Sep 13 00:42:36.012750 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 13 00:42:36.012756 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:42:36.012761 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:42:36.012767 kernel: Dynamic Preempt: voluntary
Sep 13 00:42:36.012773 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:42:36.012778 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:42:36.012785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 13 00:42:36.012791 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:42:36.012796 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:42:36.012802 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:42:36.012807 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:42:36.012812 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 13 00:42:36.012818 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Sep 13 00:42:36.012823 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:42:36.012829 kernel: Console: colour dummy device 80x25
Sep 13 00:42:36.012834 kernel: printk: console [tty0] enabled
Sep 13 00:42:36.012841 kernel: printk: console [ttyS1] enabled
Sep 13 00:42:36.012846 kernel: ACPI: Core revision 20230628
Sep 13 00:42:36.012852 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Sep 13 00:42:36.012857 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:42:36.012863 kernel: DMAR: Host address width 39
Sep 13 00:42:36.012868 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Sep 13 00:42:36.012874 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Sep 13 00:42:36.012879 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Sep 13 00:42:36.012885 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Sep 13 00:42:36.012891 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Sep 13 00:42:36.012897 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Sep 13 00:42:36.012902 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Sep 13 00:42:36.012908 kernel: x2apic enabled
Sep 13 00:42:36.012913 kernel: APIC: Switched APIC routing to: cluster x2apic
Sep 13 00:42:36.012919 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Sep 13 00:42:36.012924 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Sep 13 00:42:36.012930 kernel: CPU0: Thermal monitoring enabled (TM1)
Sep 13 00:42:36.012935 kernel: process: using mwait in idle threads
Sep 13 00:42:36.012941 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Sep 13 00:42:36.012947 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Sep 13 00:42:36.012952 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:42:36.012957 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Sep 13 00:42:36.012963 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Sep 13 00:42:36.012968 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Sep 13 00:42:36.012973 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Sep 13 00:42:36.012979 kernel: RETBleed: Mitigation: Enhanced IBRS
Sep 13 00:42:36.012984 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:42:36.012990 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:42:36.012995 kernel: TAA: Mitigation: TSX disabled
Sep 13 00:42:36.013001 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Sep 13 00:42:36.013007 kernel: SRBDS: Mitigation: Microcode
Sep 13 00:42:36.013012 kernel: GDS: Mitigation: Microcode
Sep 13 00:42:36.013018 kernel: active return thunk: its_return_thunk
Sep 13 00:42:36.013023 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:42:36.013028 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Sep 13 00:42:36.013034 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:42:36.013039 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:42:36.013044 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:42:36.013050 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Sep 13 00:42:36.013055 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Sep 13 00:42:36.013061 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:42:36.013067 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Sep 13 00:42:36.013072 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Sep 13 00:42:36.013078 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Sep 13 00:42:36.013083 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:42:36.013088 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:42:36.013094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:42:36.013100 kernel: landlock: Up and running.
Sep 13 00:42:36.013105 kernel: SELinux: Initializing.
Sep 13 00:42:36.013110 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:42:36.013116 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 13 00:42:36.013121 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Sep 13 00:42:36.013128 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 00:42:36.013133 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 00:42:36.013139 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 00:42:36.013144 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Sep 13 00:42:36.013150 kernel: ... version:                4
Sep 13 00:42:36.013155 kernel: ... bit width:              48
Sep 13 00:42:36.013161 kernel: ... generic registers:      4
Sep 13 00:42:36.013166 kernel: ... value mask:             0000ffffffffffff
Sep 13 00:42:36.013173 kernel: ... max period:             00007fffffffffff
Sep 13 00:42:36.013178 kernel: ... fixed-purpose events:   3
Sep 13 00:42:36.013183 kernel: ... event mask:             000000070000000f
Sep 13 00:42:36.013189 kernel: signal: max sigframe size: 2032
Sep 13 00:42:36.013194 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Sep 13 00:42:36.013200 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:42:36.013205 kernel: rcu: 	Max phase no-delay instances is 400.
Sep 13 00:42:36.013211 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Sep 13 00:42:36.013216 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:42:36.013222 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:42:36.013228 kernel: .... node  #0, CPUs:        #1  #2  #3  #4  #5  #6  #7  #8  #9 #10 #11 #12 #13 #14 #15
Sep 13 00:42:36.013234 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Sep 13 00:42:36.013240 kernel: smp: Brought up 1 node, 16 CPUs
Sep 13 00:42:36.013245 kernel: smpboot: Max logical packages: 1
Sep 13 00:42:36.013250 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Sep 13 00:42:36.013256 kernel: devtmpfs: initialized
Sep 13 00:42:36.013261 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:42:36.013267 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Sep 13 00:42:36.013273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Sep 13 00:42:36.013279 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:42:36.013284 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 13 00:42:36.013290 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:42:36.013295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:42:36.013301 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:42:36.013306 kernel: audit: type=2000 audit(1757724149.040:1): state=initialized audit_enabled=0 res=1
Sep 13 00:42:36.013311 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:42:36.013317 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:42:36.013323 kernel: cpuidle: using governor menu
Sep 13 00:42:36.013328 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:42:36.013334 kernel: dca service started, version 1.12.1
Sep 13 00:42:36.013339 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Sep 13 00:42:36.013345 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:42:36.013350 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Sep 13 00:42:36.013355 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:42:36.013361 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 00:42:36.013366 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 00:42:36.013373 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:42:36.013378 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:42:36.013384 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:42:36.013389 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:42:36.013394 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:42:36.013400 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Sep 13 00:42:36.013405 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:42:36.013411 kernel: ACPI: SSDT 0xFFFF8EA281B17800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Sep 13 00:42:36.013416 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:42:36.013422 kernel: ACPI: SSDT 0xFFFF8EA281B0D800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Sep 13 00:42:36.013428 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:42:36.013433 kernel: ACPI: SSDT 0xFFFF8EA280246400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Sep 13 00:42:36.013439 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:42:36.013444 kernel: ACPI: SSDT 0xFFFF8EA281E78800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Sep 13 00:42:36.013449 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:42:36.013455 kernel: ACPI: SSDT 0xFFFF8EA28012A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Sep 13 00:42:36.013460 kernel: ACPI: Dynamic OEM Table Load:
Sep 13 00:42:36.013468 kernel: ACPI: SSDT 0xFFFF8EA281B12400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Sep 13 00:42:36.013474 kernel: ACPI: _OSC evaluated successfully for all CPUs
Sep 13 00:42:36.013480 kernel: ACPI: Interpreter enabled
Sep 13 00:42:36.013486 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:42:36.013491 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:42:36.013496 kernel: HEST: Enabling Firmware First mode for corrected errors.
Sep 13 00:42:36.013502 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Sep 13 00:42:36.013507 kernel: HEST: Table parsing has been initialized.
Sep 13 00:42:36.013513 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Sep 13 00:42:36.013518 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:42:36.013524 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:42:36.013530 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Sep 13 00:42:36.013535 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Sep 13 00:42:36.013541 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Sep 13 00:42:36.013546 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Sep 13 00:42:36.013552 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Sep 13 00:42:36.013558 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Sep 13 00:42:36.013563 kernel: ACPI: \_TZ_.FN00: New power resource
Sep 13 00:42:36.013569 kernel: ACPI: \_TZ_.FN01: New power resource
Sep 13 00:42:36.013574 kernel: ACPI: \_TZ_.FN02: New power resource
Sep 13 00:42:36.013580 kernel: ACPI: \_TZ_.FN03: New power resource
Sep 13 00:42:36.013586 kernel: ACPI: \_TZ_.FN04: New power resource
Sep 13 00:42:36.013591 kernel: ACPI: \PIN_: New power resource
Sep 13 00:42:36.013597 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Sep 13 00:42:36.013669 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:42:36.013725 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Sep 13 00:42:36.013775 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Sep 13 00:42:36.013785 kernel: PCI host bridge to bus 0000:00
Sep 13 00:42:36.013839 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:42:36.013883 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:42:36.013928 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:42:36.013971 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Sep 13 00:42:36.014015 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Sep 13 00:42:36.014058 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Sep 13 00:42:36.014119 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Sep 13 00:42:36.014177 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Sep 13 00:42:36.014229 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.014283 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Sep 13 00:42:36.014333 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Sep 13 00:42:36.014386 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Sep 13 00:42:36.014439 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Sep 13 00:42:36.014497 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Sep 13 00:42:36.014548 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Sep 13 00:42:36.014596 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Sep 13 00:42:36.014650 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Sep 13 00:42:36.014700 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Sep 13 00:42:36.014752 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Sep 13 00:42:36.014805 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Sep 13 00:42:36.014855 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 00:42:36.014911 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Sep 13 00:42:36.014960 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 00:42:36.015014 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Sep 13 00:42:36.015065 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Sep 13 00:42:36.015117 kernel: pci 0000:00:16.0: PME# supported from D3hot
Sep 13 00:42:36.015177 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Sep 13 00:42:36.015230 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Sep 13 00:42:36.015279 kernel: pci 0000:00:16.1: PME# supported from D3hot
Sep 13 00:42:36.015333 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Sep 13 00:42:36.015384 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Sep 13 00:42:36.015436 kernel: pci 0000:00:16.4: PME# supported from D3hot
Sep 13 00:42:36.015499 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Sep 13 00:42:36.015551 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Sep 13 00:42:36.015599 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Sep 13 00:42:36.015648 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Sep 13 00:42:36.015697 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Sep 13 00:42:36.015746 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Sep 13 00:42:36.015799 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Sep 13 00:42:36.015848 kernel: pci 0000:00:17.0: PME# supported from D3hot
Sep 13 00:42:36.015903 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Sep 13 00:42:36.015953 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.016011 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Sep 13 00:42:36.016064 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.016120 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Sep 13 00:42:36.016170 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.016225 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Sep 13 00:42:36.016275 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.016331 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Sep 13 00:42:36.016383 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.016437 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Sep 13 00:42:36.016491 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Sep 13 00:42:36.016546 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Sep 13 00:42:36.016600 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Sep 13 00:42:36.016653 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Sep 13 00:42:36.016704 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Sep 13 00:42:36.016760 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Sep 13 00:42:36.016810 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Sep 13 00:42:36.016866 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Sep 13 00:42:36.016918 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Sep 13 00:42:36.016972 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Sep 13 00:42:36.017023 kernel: pci 0000:01:00.0: PME# supported from D3cold
Sep 13 00:42:36.017074 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 00:42:36.017126 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 00:42:36.017183 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Sep 13 00:42:36.017235 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Sep 13 00:42:36.017286 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Sep 13 00:42:36.017340 kernel: pci 0000:01:00.1: PME# supported from D3cold
Sep 13 00:42:36.017391 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Sep 13 00:42:36.017443 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Sep 13 00:42:36.017499 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Sep 13 00:42:36.017549 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Sep 13 00:42:36.017600 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Sep 13 00:42:36.017651 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Sep 13 00:42:36.017707 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Sep 13 00:42:36.017762 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Sep 13 00:42:36.017813 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Sep 13 00:42:36.017864 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Sep 13 00:42:36.017914 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Sep 13 00:42:36.017966 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.018015 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Sep 13 00:42:36.018066 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Sep 13 00:42:36.018120 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Sep 13 00:42:36.018175 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Sep 13 00:42:36.018227 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Sep 13 00:42:36.018278 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Sep 13 00:42:36.018329 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Sep 13 00:42:36.018380 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Sep 13 00:42:36.018431 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Sep 13 00:42:36.018487 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Sep 13 00:42:36.018541 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Sep 13 00:42:36.018590 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Sep 13 00:42:36.018640 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Sep 13 00:42:36.018697 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Sep 13 00:42:36.018747 kernel: pci 0000:06:00.0: enabling Extended Tags
Sep 13 00:42:36.018799 kernel: pci 0000:06:00.0: supports D1 D2
Sep 13 00:42:36.018850 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 13 00:42:36.018904 kernel: pci 0000:00:1c.3: PCI bridge
to [bus 06-07] Sep 13 00:42:36.018953 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 13 00:42:36.019004 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.019059 kernel: pci_bus 0000:07: extended config space not accessible Sep 13 00:42:36.019118 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 13 00:42:36.019172 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 13 00:42:36.019225 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 13 00:42:36.019281 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 13 00:42:36.019334 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:42:36.019389 kernel: pci 0000:07:00.0: supports D1 D2 Sep 13 00:42:36.019443 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:42:36.019504 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 13 00:42:36.019567 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 13 00:42:36.019629 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.019639 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 13 00:42:36.019649 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 13 00:42:36.019657 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 13 00:42:36.019664 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 13 00:42:36.019671 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 13 00:42:36.019680 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 13 00:42:36.019687 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 13 00:42:36.019694 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 13 00:42:36.019702 kernel: iommu: Default domain type: Translated Sep 13 00:42:36.019709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:42:36.019717 kernel: PCI: Using ACPI for IRQ 
routing Sep 13 00:42:36.019725 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:42:36.019732 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 13 00:42:36.019739 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Sep 13 00:42:36.019746 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 13 00:42:36.019753 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 13 00:42:36.019760 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 13 00:42:36.019767 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 13 00:42:36.019830 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 13 00:42:36.019896 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 13 00:42:36.019961 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:42:36.019972 kernel: vgaarb: loaded Sep 13 00:42:36.019980 kernel: clocksource: Switched to clocksource tsc-early Sep 13 00:42:36.019987 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:42:36.019995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:42:36.020002 kernel: pnp: PnP ACPI init Sep 13 00:42:36.020063 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 13 00:42:36.020125 kernel: pnp 00:02: [dma 0 disabled] Sep 13 00:42:36.020186 kernel: pnp 00:03: [dma 0 disabled] Sep 13 00:42:36.020248 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 13 00:42:36.020304 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 13 00:42:36.020363 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Sep 13 00:42:36.020419 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Sep 13 00:42:36.020479 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Sep 13 00:42:36.020535 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Sep 13 00:42:36.020589 kernel: system 00:05: [mem 
0xfed20000-0xfed3ffff] has been reserved Sep 13 00:42:36.020648 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 13 00:42:36.020704 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 13 00:42:36.020760 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 13 00:42:36.020819 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Sep 13 00:42:36.020878 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 13 00:42:36.020933 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 13 00:42:36.020988 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 13 00:42:36.021043 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 13 00:42:36.021097 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 13 00:42:36.021153 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Sep 13 00:42:36.021211 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Sep 13 00:42:36.021224 kernel: pnp: PnP ACPI: found 9 devices Sep 13 00:42:36.021231 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:42:36.021239 kernel: NET: Registered PF_INET protocol family Sep 13 00:42:36.021246 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:42:36.021254 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 00:42:36.021261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:42:36.021268 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:42:36.021276 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 13 00:42:36.021284 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 13 00:42:36.021292 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, 
linear) Sep 13 00:42:36.021299 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:42:36.021306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:42:36.021313 kernel: NET: Registered PF_XDP protocol family Sep 13 00:42:36.021374 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 13 00:42:36.021435 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 13 00:42:36.021499 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 13 00:42:36.021563 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021628 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021691 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021753 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021814 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:42:36.021875 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 13 00:42:36.021935 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 00:42:36.021995 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 13 00:42:36.022059 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 13 00:42:36.022121 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 13 00:42:36.022182 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 13 00:42:36.022241 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 13 00:42:36.022301 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 13 00:42:36.022363 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 13 00:42:36.022424 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 13 00:42:36.022488 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 13 00:42:36.022551 kernel: pci 0000:06:00.0: bridge 
window [io 0x3000-0x3fff] Sep 13 00:42:36.022614 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.022675 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 13 00:42:36.022736 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 13 00:42:36.022795 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.022851 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 13 00:42:36.022908 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:42:36.022963 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:42:36.023016 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:42:36.023069 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 13 00:42:36.023122 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 13 00:42:36.023183 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 13 00:42:36.023239 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 00:42:36.023303 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 13 00:42:36.023359 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 13 00:42:36.023422 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 13 00:42:36.023484 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 13 00:42:36.023536 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 13 00:42:36.023582 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 13 00:42:36.023632 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 13 00:42:36.023683 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 13 00:42:36.023691 kernel: PCI: CLS 64 bytes, default 64 Sep 13 00:42:36.023697 kernel: DMAR: No ATSR found Sep 13 00:42:36.023703 kernel: DMAR: No SATC found Sep 13 00:42:36.023709 kernel: DMAR: dmar0: Using 
Queued invalidation Sep 13 00:42:36.023759 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 13 00:42:36.023810 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 13 00:42:36.023860 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 13 00:42:36.023914 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 13 00:42:36.023964 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 13 00:42:36.024014 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 13 00:42:36.024063 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 13 00:42:36.024112 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 13 00:42:36.024162 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 13 00:42:36.024212 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 13 00:42:36.024262 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 13 00:42:36.024314 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 13 00:42:36.024363 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 13 00:42:36.024413 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 13 00:42:36.024466 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 13 00:42:36.024517 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 13 00:42:36.024568 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 13 00:42:36.024616 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 13 00:42:36.024667 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 13 00:42:36.024719 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 13 00:42:36.024769 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Sep 13 00:42:36.024819 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 13 00:42:36.024871 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 13 00:42:36.024923 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 13 00:42:36.024974 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 13 00:42:36.025026 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 13 00:42:36.025079 kernel: pci 0000:07:00.0: Adding to iommu group 
17 Sep 13 00:42:36.025089 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 13 00:42:36.025096 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 00:42:36.025102 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 13 00:42:36.025107 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 13 00:42:36.025113 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 13 00:42:36.025119 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 13 00:42:36.025125 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 13 00:42:36.025177 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 13 00:42:36.025186 kernel: Initialise system trusted keyrings Sep 13 00:42:36.025194 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 13 00:42:36.025199 kernel: Key type asymmetric registered Sep 13 00:42:36.025205 kernel: Asymmetric key parser 'x509' registered Sep 13 00:42:36.025211 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:42:36.025217 kernel: io scheduler mq-deadline registered Sep 13 00:42:36.025222 kernel: io scheduler kyber registered Sep 13 00:42:36.025228 kernel: io scheduler bfq registered Sep 13 00:42:36.025277 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 13 00:42:36.025328 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 13 00:42:36.025380 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 13 00:42:36.025429 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 13 00:42:36.025484 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 13 00:42:36.025535 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 13 00:42:36.025589 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 13 00:42:36.025598 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 13 00:42:36.025604 kernel: ERST: Error 
Record Serialization Table (ERST) support is initialized. Sep 13 00:42:36.025612 kernel: pstore: Using crash dump compression: deflate Sep 13 00:42:36.025617 kernel: pstore: Registered erst as persistent store backend Sep 13 00:42:36.025623 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:42:36.025629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:42:36.025635 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:42:36.025641 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 13 00:42:36.025647 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 13 00:42:36.025697 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 13 00:42:36.025708 kernel: i8042: PNP: No PS/2 controller found. Sep 13 00:42:36.025752 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 13 00:42:36.025799 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 13 00:42:36.025846 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-13T00:42:34 UTC (1757724154) Sep 13 00:42:36.025892 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 13 00:42:36.025900 kernel: intel_pstate: Intel P-state driver initializing Sep 13 00:42:36.025906 kernel: intel_pstate: Disabling energy efficiency optimization Sep 13 00:42:36.025912 kernel: intel_pstate: HWP enabled Sep 13 00:42:36.025920 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 13 00:42:36.025926 kernel: vesafb: scrolling: redraw Sep 13 00:42:36.025931 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 13 00:42:36.025937 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000005e393bb2, using 768k, total 768k Sep 13 00:42:36.025943 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:42:36.025949 kernel: fb0: VESA VGA frame buffer device Sep 13 00:42:36.025955 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:42:36.025960 kernel: Segment Routing with 
IPv6 Sep 13 00:42:36.025966 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:42:36.025972 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:42:36.025979 kernel: Key type dns_resolver registered Sep 13 00:42:36.025984 kernel: microcode: Current revision: 0x000000fc Sep 13 00:42:36.025990 kernel: microcode: Updated early from: 0x000000f4 Sep 13 00:42:36.025996 kernel: microcode: Microcode Update Driver: v2.2. Sep 13 00:42:36.026001 kernel: IPI shorthand broadcast: enabled Sep 13 00:42:36.026007 kernel: sched_clock: Marking stable (1572001033, 1379367810)->(4421608757, -1470239914) Sep 13 00:42:36.026013 kernel: registered taskstats version 1 Sep 13 00:42:36.026019 kernel: Loading compiled-in X.509 certificates Sep 13 00:42:36.026025 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:42:36.026031 kernel: Key type .fscrypt registered Sep 13 00:42:36.026037 kernel: Key type fscrypt-provisioning registered Sep 13 00:42:36.026043 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:42:36.026048 kernel: ima: No architecture policies found Sep 13 00:42:36.026054 kernel: clk: Disabling unused clocks Sep 13 00:42:36.026060 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:42:36.026066 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:42:36.026072 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:42:36.026078 kernel: Run /init as init process Sep 13 00:42:36.026084 kernel: with arguments: Sep 13 00:42:36.026090 kernel: /init Sep 13 00:42:36.026096 kernel: with environment: Sep 13 00:42:36.026101 kernel: HOME=/ Sep 13 00:42:36.026107 kernel: TERM=linux Sep 13 00:42:36.026113 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:42:36.026120 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD 
+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:42:36.026128 systemd[1]: Detected architecture x86-64. Sep 13 00:42:36.026134 systemd[1]: Running in initrd. Sep 13 00:42:36.026140 systemd[1]: No hostname configured, using default hostname. Sep 13 00:42:36.026145 systemd[1]: Hostname set to . Sep 13 00:42:36.026151 systemd[1]: Initializing machine ID from random generator. Sep 13 00:42:36.026157 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:42:36.026163 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:42:36.026169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:42:36.026177 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:42:36.026183 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:42:36.026189 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:42:36.026195 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:42:36.026201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:42:36.026208 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Sep 13 00:42:36.026214 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Sep 13 00:42:36.026220 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns
Sep 13 00:42:36.026226 kernel: clocksource: Switched to clocksource tsc
Sep 13 00:42:36.026232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:42:36.026238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:42:36.026244 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:42:36.026250 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:42:36.026256 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:42:36.026262 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:42:36.026269 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:42:36.026275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:42:36.026281 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:42:36.026287 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:42:36.026293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:42:36.026299 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:42:36.026305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:42:36.026311 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:42:36.026317 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:42:36.026324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:42:36.026330 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:42:36.026336 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:42:36.026342 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:42:36.026358 systemd-journald[266]: Collecting audit messages is disabled.
Sep 13 00:42:36.026374 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:42:36.026380 systemd-journald[266]: Journal started
Sep 13 00:42:36.026393 systemd-journald[266]: Runtime Journal (/run/log/journal/e81ad24ecb5142c78b1fce2a2184a42f) is 8.0M, max 639.9M, 631.9M free.
Sep 13 00:42:36.039972 systemd-modules-load[268]: Inserted module 'overlay'
Sep 13 00:42:36.087113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:42:36.087150 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:42:36.130500 kernel: Bridge firewalling registered
Sep 13 00:42:36.130518 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:42:36.148717 systemd-modules-load[268]: Inserted module 'br_netfilter'
Sep 13 00:42:36.148987 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:42:36.149097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:42:36.149186 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:42:36.149271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:42:36.163882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:42:36.248714 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:42:36.261170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:42:36.275105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:42:36.308558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:42:36.329137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:42:36.351202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:42:36.389760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:42:36.402366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:42:36.415150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:42:36.421595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:42:36.425112 systemd-resolved[295]: Positive Trust Anchors:
Sep 13 00:42:36.425122 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:42:36.425172 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:42:36.427444 systemd-resolved[295]: Defaulting to hostname 'linux'.
Sep 13 00:42:36.428186 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:42:36.455829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:42:36.466724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:42:36.507089 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:42:36.535899 dracut-cmdline[308]: dracut-dracut-053
Sep 13 00:42:36.543710 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:42:36.744467 kernel: SCSI subsystem initialized
Sep 13 00:42:36.768493 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:42:36.791496 kernel: iscsi: registered transport (tcp)
Sep 13 00:42:36.824347 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:42:36.824367 kernel: QLogic iSCSI HBA Driver
Sep 13 00:42:36.857463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:42:36.892717 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:42:36.953785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:42:36.953806 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:42:36.973641 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:42:37.031534 kernel: raid6: avx2x4 gen() 52141 MB/s
Sep 13 00:42:37.063533 kernel: raid6: avx2x2 gen() 52438 MB/s
Sep 13 00:42:37.100616 kernel: raid6: avx2x1 gen() 43961 MB/s
Sep 13 00:42:37.100635 kernel: raid6: using algorithm avx2x2 gen() 52438 MB/s
Sep 13 00:42:37.148135 kernel: raid6: .... xor() 31144 MB/s, rmw enabled
Sep 13 00:42:37.148154 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:42:37.189516 kernel: xor: automatically using best checksumming function   avx
Sep 13 00:42:37.303506 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:42:37.309374 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:42:37.337802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:42:37.345096 systemd-udevd[493]: Using default interface naming scheme 'v255'.
Sep 13 00:42:37.348594 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:42:37.383698 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:42:37.428474 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Sep 13 00:42:37.445672 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:42:37.470717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:42:37.531618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:42:37.564587 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:42:37.564643 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:42:37.591477 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:42:37.593713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:42:37.618917 kernel: libata version 3.00 loaded.
Sep 13 00:42:37.618932 kernel: ACPI: bus type USB registered
Sep 13 00:42:37.618942 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:42:37.595101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:42:37.658874 kernel: usbcore: registered new interface driver hub
Sep 13 00:42:37.658898 kernel: usbcore: registered new device driver usb
Sep 13 00:42:37.595133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:42:37.675418 kernel: PTP clock support registered
Sep 13 00:42:37.672512 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:42:37.681397 kernel: ahci 0000:00:17.0: version 3.0
Sep 13 00:42:37.681506 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:42:37.681516 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Sep 13 00:42:37.719553 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Sep 13 00:42:37.727559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:42:37.746130 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:42:37.727603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:42:38.141835 kernel: scsi host0: ahci
Sep 13 00:42:38.141967 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Sep 13 00:42:38.142059 kernel: scsi host1: ahci
Sep 13 00:42:38.142144 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Sep 13 00:42:38.142260 kernel: scsi host2: ahci
Sep 13 00:42:38.142391 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Sep 13 00:42:38.142525 kernel: scsi host3: ahci
Sep 13 00:42:38.142653 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Sep 13 00:42:38.142781 kernel: scsi host4: ahci
Sep 13 00:42:38.142906 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Sep 13 00:42:38.143033 kernel: scsi host5: ahci
Sep 13 00:42:38.143208 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Sep 13 00:42:38.143327 kernel: scsi host6: ahci
Sep 13 00:42:38.143440 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:42:38.143584 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127
Sep 13 00:42:38.143602 kernel: hub 1-0:1.0: 16 ports detected
Sep 13 00:42:38.143724 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127
Sep 13 00:42:38.143742 kernel: hub 2-0:1.0: USB hub found
Sep 13 00:42:38.143855 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127
Sep 13 00:42:38.143869 kernel: hub 2-0:1.0: 10 ports detected
Sep 13 00:42:38.143972 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127
Sep 13 00:42:38.143989 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127
Sep 13 00:42:38.144002 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Sep 13 00:42:38.144026 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127
Sep 13 00:42:38.144039 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127
Sep 13 00:42:37.745579 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:42:38.203543 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Sep 13 00:42:38.203558 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Sep 13 00:42:38.132043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:42:38.351663 kernel: igb 0000:03:00.0: added PHC on eth0
Sep 13 00:42:38.352173 kernel: hub 1-14:1.0: USB hub found
Sep 13 00:42:38.352615 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection
Sep 13 00:42:38.352979 kernel: hub 1-14:1.0: 4 ports detected
Sep 13 00:42:38.353330 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9e
Sep 13 00:42:38.353747 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000
Sep 13 00:42:38.354010 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Sep 13 00:42:38.354306 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016
Sep 13 00:42:38.354583 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Sep 13 00:42:38.141925 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:42:38.509392 kernel: igb 0000:04:00.0: added PHC on eth1
Sep 13 00:42:38.509632 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Sep 13 00:42:38.509705 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9f
Sep 13 00:42:38.509771 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000
Sep 13 00:42:38.509835 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Sep 13 00:42:38.509900 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Sep 13 00:42:38.509908 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Sep 13 00:42:38.509916 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 13 00:42:38.509923 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 13 00:42:38.509931 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Sep 13 00:42:38.193989 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:42:38.730625 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Sep 13 00:42:38.730705 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 13 00:42:38.730724 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Sep 13 00:42:38.730844 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Sep 13 00:42:38.730853 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Sep 13 00:42:38.730863 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Sep 13 00:42:38.730870 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Sep 13 00:42:38.730944 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Sep 13 00:42:38.730952 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged
Sep 13 00:42:38.731016 kernel: ata1.00: Features: NCQ-prio
Sep 13 00:42:38.731024 kernel: ata2.00: Features: NCQ-prio
Sep 13 00:42:38.731031 kernel: ata2.00: configured for UDMA/133
Sep 13 00:42:38.731038 kernel: ata1.00: configured for UDMA/133
Sep 13 00:42:38.731045 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Sep 13 00:42:38.258560 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:42:38.749477 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Sep 13 00:42:38.286586 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:42:38.608567 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:42:38.775467 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 13 00:42:38.775512 kernel: igb 0000:03:00.0 eno1: renamed from eth0
Sep 13 00:42:38.820114 kernel: usbcore: registered new interface driver usbhid
Sep 13 00:42:38.820132 kernel: usbhid: USB HID core driver
Sep 13 00:42:38.823516 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 13 00:42:38.823617 kernel: igb 0000:04:00.0 eno2: renamed from eth1
Sep 13 00:42:38.861590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:42:38.948578 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Sep 13 00:42:38.948593 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016
Sep 13 00:42:38.948686 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Sep 13 00:42:38.948757 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Sep 13 00:42:38.948839 kernel: ata1.00: Enabling discard_zeroes_data
Sep 13 00:42:38.891741 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:42:39.470134 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Sep 13 00:42:39.470152 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Sep 13 00:42:39.470241 kernel: ata2.00: Enabling discard_zeroes_data
Sep 13 00:42:39.470250 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Sep 13 00:42:39.470321 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Sep 13 00:42:39.470384 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Sep 13 00:42:39.470466 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Sep 13 00:42:39.470552 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:42:39.470612 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Sep 13 00:42:39.470671 kernel: ata2.00: Enabling discard_zeroes_data
Sep 13 00:42:39.470679 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:42:39.470687 kernel: GPT:9289727 != 937703087
Sep 13 00:42:39.470696 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:42:39.470703 kernel: GPT:9289727 != 937703087
Sep 13 00:42:39.470709 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:42:39.470716 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Sep 13 00:42:39.470723 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Sep 13 00:42:39.470783 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Sep 13 00:42:39.470858 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 13 00:42:39.470917 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384)
Sep 13 00:42:39.470994 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 13 00:42:39.471056 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged
Sep 13 00:42:39.471129 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Sep 13 00:42:39.471191 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Sep 13 00:42:39.471249 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Sep 13 00:42:39.471309 kernel: ata1.00: Enabling discard_zeroes_data
Sep 13 00:42:39.471317 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Sep 13 00:42:39.471381 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 13 00:42:39.471441 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0
Sep 13 00:42:39.476750 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:42:39.484467 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1
Sep 13 00:42:39.484581 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sdb3 scanned by (udev-worker) (542)
Sep 13 00:42:39.498280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Sep 13 00:42:39.580643 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (548)
Sep 13 00:42:39.566842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:42:39.596180 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Sep 13 00:42:39.625895 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Sep 13 00:42:39.636659 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Sep 13 00:42:39.666003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Sep 13 00:42:39.709640 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:42:39.750601 kernel: ata2.00: Enabling discard_zeroes_data
Sep 13 00:42:39.750618 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Sep 13 00:42:39.750630 disk-uuid[720]: Primary Header is updated.
Sep 13 00:42:39.750630 disk-uuid[720]: Secondary Entries is updated.
Sep 13 00:42:39.750630 disk-uuid[720]: Secondary Header is updated.
Sep 13 00:42:39.807550 kernel: ata2.00: Enabling discard_zeroes_data
Sep 13 00:42:39.807563 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Sep 13 00:42:39.807572 kernel: ata2.00: Enabling discard_zeroes_data
Sep 13 00:42:39.837467 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Sep 13 00:42:40.814255 kernel: ata2.00: Enabling discard_zeroes_data
Sep 13 00:42:40.834300 disk-uuid[721]: The operation has completed successfully.
Sep 13 00:42:40.842587 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9
Sep 13 00:42:40.870131 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:42:40.870195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:42:40.905727 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:42:40.932644 sh[738]: Success
Sep 13 00:42:40.941559 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:42:40.990002 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:42:41.007804 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:42:41.018659 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:42:41.063569 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:42:41.063584 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:42:41.078970 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:42:41.097876 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:42:41.115474 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:42:41.154510 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 13 00:42:41.157166 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:42:41.165898 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:42:41.179727 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:42:41.283791 kernel: BTRFS info (device sdb6): first mount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:41.283804 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:42:41.283812 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 00:42:41.283820 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 00:42:41.283830 kernel: BTRFS info (device sdb6): auto enabling async discard
Sep 13 00:42:41.306506 kernel: BTRFS info (device sdb6): last unmount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:41.316955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:42:41.329395 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:42:41.363969 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:42:41.404956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:42:41.433659 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:42:41.445026 systemd-networkd[923]: lo: Link UP
Sep 13 00:42:41.443014 ignition[819]: Ignition 2.19.0
Sep 13 00:42:41.445028 systemd-networkd[923]: lo: Gained carrier
Sep 13 00:42:41.443018 ignition[819]: Stage: fetch-offline
Sep 13 00:42:41.445319 unknown[819]: fetched base config from "system"
Sep 13 00:42:41.443042 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:41.445338 unknown[819]: fetched user config from "system"
Sep 13 00:42:41.443047 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:41.446791 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:42:41.443104 ignition[819]: parsed url from cmdline: ""
Sep 13 00:42:41.447615 systemd-networkd[923]: Enumeration completed
Sep 13 00:42:41.443106 ignition[819]: no config URL provided
Sep 13 00:42:41.448800 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:42:41.443108 ignition[819]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:42:41.465963 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:42:41.443131 ignition[819]: parsing config with SHA512: 87119c394706ec875dde59d2a5253b5552ce4067b113cb732224f319925b871b69c946a4a409dc012147818c7af0452ee413a631529a9e9a4fe3f7b482322cf5
Sep 13 00:42:41.478452 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:42:41.445974 ignition[819]: fetch-offline: fetch-offline passed
Sep 13 00:42:41.483048 systemd[1]: Reached target network.target - Network.
Sep 13 00:42:41.445977 ignition[819]: POST message to Packet Timeline
Sep 13 00:42:41.489877 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:42:41.445980 ignition[819]: POST Status error: resource requires networking
Sep 13 00:42:41.509136 systemd-networkd[923]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:42:41.446020 ignition[819]: Ignition finished successfully
Sep 13 00:42:41.703563 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Sep 13 00:42:41.512788 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:42:41.541793 ignition[937]: Ignition 2.19.0
Sep 13 00:42:41.700085 systemd-networkd[923]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:42:41.541806 ignition[937]: Stage: kargs
Sep 13 00:42:41.542147 ignition[937]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:41.542170 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:41.543960 ignition[937]: kargs: kargs passed
Sep 13 00:42:41.543969 ignition[937]: POST message to Packet Timeline
Sep 13 00:42:41.543995 ignition[937]: GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:41.545541 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50975->[::1]:53: read: connection refused
Sep 13 00:42:41.745747 ignition[937]: GET https://metadata.packet.net/metadata: attempt #2
Sep 13 00:42:41.746957 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33689->[::1]:53: read: connection refused
Sep 13 00:42:41.906500 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Sep 13 00:42:41.909602 systemd-networkd[923]: eno1: Link UP
Sep 13 00:42:41.909808 systemd-networkd[923]: eno2: Link UP
Sep 13 00:42:41.910001 systemd-networkd[923]: enp1s0f0np0: Link UP
Sep 13 00:42:41.910228 systemd-networkd[923]: enp1s0f0np0: Gained carrier
Sep 13 00:42:41.927740 systemd-networkd[923]: enp1s0f1np1: Link UP
Sep 13 00:42:41.965703 systemd-networkd[923]: enp1s0f0np0: DHCPv4 address 139.178.94.199/31, gateway 139.178.94.198 acquired from 145.40.83.140
Sep 13 00:42:42.147383 ignition[937]: GET https://metadata.packet.net/metadata: attempt #3
Sep 13 00:42:42.148486 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46539->[::1]:53: read: connection refused
Sep 13 00:42:42.709322 systemd-networkd[923]: enp1s0f1np1: Gained carrier
Sep 13 00:42:42.948936 ignition[937]: GET https://metadata.packet.net/metadata: attempt #4
Sep 13 00:42:42.950212 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35729->[::1]:53: read: connection refused
Sep 13 00:42:43.669092 systemd-networkd[923]: enp1s0f0np0: Gained IPv6LL
Sep 13 00:42:44.181114 systemd-networkd[923]: enp1s0f1np1: Gained IPv6LL
Sep 13 00:42:44.551846 ignition[937]: GET https://metadata.packet.net/metadata: attempt #5
Sep 13 00:42:44.553044 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38371->[::1]:53: read: connection refused
Sep 13 00:42:47.756619 ignition[937]: GET https://metadata.packet.net/metadata: attempt #6
Sep 13 00:42:48.825206 ignition[937]: GET result: OK
Sep 13 00:42:49.262748 ignition[937]: Ignition finished successfully
Sep 13 00:42:49.267774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:42:49.296708 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:42:49.302971 ignition[955]: Ignition 2.19.0
Sep 13 00:42:49.302975 ignition[955]: Stage: disks
Sep 13 00:42:49.303083 ignition[955]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:49.303090 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:49.303624 ignition[955]: disks: disks passed
Sep 13 00:42:49.303627 ignition[955]: POST message to Packet Timeline
Sep 13 00:42:49.303636 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:50.240387 ignition[955]: GET result: OK
Sep 13 00:42:50.660333 ignition[955]: Ignition finished successfully
Sep 13 00:42:50.663653 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:42:50.679711 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:42:50.697720 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:42:50.718751 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:42:50.740880 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:42:50.761876 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:42:50.791730 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:42:50.832083 systemd-fsck[971]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:42:50.841971 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:42:50.852696 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:42:50.972277 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:42:50.972466 kernel: EXT4-fs (sdb9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:42:50.986338 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:42:51.012670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:42:51.021213 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:42:51.140434 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (980)
Sep 13 00:42:51.140448 kernel: BTRFS info (device sdb6): first mount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:51.140456 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:42:51.140468 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 00:42:51.140515 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 00:42:51.140522 kernel: BTRFS info (device sdb6): auto enabling async discard
Sep 13 00:42:51.063104 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:42:51.154842 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Sep 13 00:42:51.177598 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:42:51.177619 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:42:51.238666 coreos-metadata[998]: Sep 13 00:42:51.215 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 00:42:51.201418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:42:51.266635 coreos-metadata[982]: Sep 13 00:42:51.215 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 00:42:51.228816 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:42:51.263698 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:42:51.307591 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:42:51.317647 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:42:51.327556 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:42:51.337578 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:42:51.361369 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:42:51.384753 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:42:51.419701 kernel: BTRFS info (device sdb6): last unmount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:51.402967 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:42:51.430189 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:42:51.452756 ignition[1100]: INFO : Ignition 2.19.0
Sep 13 00:42:51.452756 ignition[1100]: INFO : Stage: mount
Sep 13 00:42:51.460601 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:51.460601 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:51.460601 ignition[1100]: INFO : mount: mount passed
Sep 13 00:42:51.460601 ignition[1100]: INFO : POST message to Packet Timeline
Sep 13 00:42:51.460601 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:51.459267 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:42:52.259626 coreos-metadata[982]: Sep 13 00:42:52.259 INFO Fetch successful
Sep 13 00:42:52.271033 coreos-metadata[998]: Sep 13 00:42:52.270 INFO Fetch successful
Sep 13 00:42:52.303469 coreos-metadata[982]: Sep 13 00:42:52.303 INFO wrote hostname ci-4081.3.5-n-2af8d06a22 to /sysroot/etc/hostname
Sep 13 00:42:52.304623 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:42:52.330811 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Sep 13 00:42:52.330858 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Sep 13 00:42:52.409334 ignition[1100]: INFO : GET result: OK
Sep 13 00:42:53.045920 ignition[1100]: INFO : Ignition finished successfully
Sep 13 00:42:53.048916 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:42:53.080715 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:42:53.091784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:42:53.146466 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1127)
Sep 13 00:42:53.175411 kernel: BTRFS info (device sdb6): first mount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:53.175427 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:42:53.192687 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 00:42:53.229932 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 00:42:53.229949 kernel: BTRFS info (device sdb6): auto enabling async discard
Sep 13 00:42:53.243058 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:42:53.267982 ignition[1144]: INFO : Ignition 2.19.0
Sep 13 00:42:53.267982 ignition[1144]: INFO : Stage: files
Sep 13 00:42:53.281745 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:53.281745 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:53.281745 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Sep 13 00:42:53.281745 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:42:53.271903 unknown[1144]: wrote ssh authorized keys file for user: core
Sep 13 00:42:53.415559 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:42:53.471999 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:42:54.015409 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:42:54.404255 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:54.404255 ignition[1144]: INFO : files: op(b): [started]  processing unit "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: files passed
Sep 13 00:42:54.434908 ignition[1144]: INFO : POST message to Packet Timeline
Sep 13 00:42:54.434908 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:55.263501 ignition[1144]: INFO : GET result: OK
Sep 13 00:42:55.687973 ignition[1144]: INFO : Ignition finished successfully
Sep 13 00:42:55.690925 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:42:55.719780 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 00:42:55.720193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 00:42:55.749006 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:42:55.749088 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 13 00:42:55.782858 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:42:55.798983 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 00:42:55.830717 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:42:55.830717 initrd-setup-root-after-ignition[1184]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:42:55.844687 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:42:55.839575 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 00:42:55.938699 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:42:55.938755 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 00:42:55.957868 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 00:42:55.978706 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 00:42:55.998951 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 00:42:56.012862 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 00:42:56.087743 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:42:56.113879 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Sep 13 00:42:56.131426 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:42:56.135652 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 00:42:56.167781 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 00:42:56.185801 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:42:56.185956 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 00:42:56.214212 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 00:42:56.235031 systemd[1]: Stopped target basic.target - Basic System. Sep 13 00:42:56.252689 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 00:42:56.270746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 00:42:56.291828 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 00:42:56.314070 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 00:42:56.335050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:42:56.357118 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 00:42:56.379207 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 00:42:56.400182 systemd[1]: Stopped target swap.target - Swaps. Sep 13 00:42:56.420072 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:42:56.420509 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 00:42:56.446220 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:42:56.466105 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:42:56.487059 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 00:42:56.487521 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 13 00:42:56.509956 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:42:56.510352 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 00:42:56.542065 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:42:56.542547 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:42:56.562284 systemd[1]: Stopped target paths.target - Path Units. Sep 13 00:42:56.581941 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:42:56.582431 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:42:56.603090 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 00:42:56.622081 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 00:42:56.640048 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:42:56.640363 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:42:56.660082 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 00:42:56.660384 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:42:56.684134 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:42:56.684563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 00:42:56.704156 systemd[1]: ignition-files.service: Deactivated successfully. 
Sep 13 00:42:56.805868 ignition[1211]: INFO : Ignition 2.19.0 Sep 13 00:42:56.805868 ignition[1211]: INFO : Stage: umount Sep 13 00:42:56.805868 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:56.805868 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 00:42:56.805868 ignition[1211]: INFO : umount: umount passed Sep 13 00:42:56.805868 ignition[1211]: INFO : POST message to Packet Timeline Sep 13 00:42:56.805868 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 13 00:42:56.704563 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 00:42:56.722112 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:42:56.722515 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 13 00:42:56.752687 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 00:42:56.772597 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:42:56.772725 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:42:56.805789 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 00:42:56.820692 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:42:56.821111 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:42:56.827808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:42:56.827950 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:42:56.878297 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:42:56.878675 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:42:56.878725 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 00:42:56.897679 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Sep 13 00:42:56.897742 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 00:42:57.861459 ignition[1211]: INFO : GET result: OK Sep 13 00:42:58.686853 ignition[1211]: INFO : Ignition finished successfully Sep 13 00:42:58.690115 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:42:58.690406 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 00:42:58.707752 systemd[1]: Stopped target network.target - Network. Sep 13 00:42:58.724718 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:42:58.724891 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 00:42:58.743855 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:42:58.744020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 00:42:58.763983 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:42:58.764141 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 00:42:58.782985 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 00:42:58.783150 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 00:42:58.801981 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:42:58.802152 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 00:42:58.821233 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 00:42:58.832612 systemd-networkd[923]: enp1s0f1np1: DHCPv6 lease lost Sep 13 00:42:58.840723 systemd-networkd[923]: enp1s0f0np0: DHCPv6 lease lost Sep 13 00:42:58.840956 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 00:42:58.859571 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:42:58.859861 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 00:42:58.878863 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 13 00:42:58.879241 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 00:42:58.900135 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:42:58.900255 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:42:58.931669 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 00:42:58.959614 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:42:58.959655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:42:58.978702 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:42:58.978768 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:42:58.997849 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:42:58.998002 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 00:42:59.018811 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 00:42:59.018969 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:42:59.038101 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:42:59.060581 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 00:42:59.060960 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:42:59.092543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:42:59.092686 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 00:42:59.098983 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:42:59.099095 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:42:59.126780 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 13 00:42:59.126938 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:42:59.157673 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:42:59.157845 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 00:42:59.186684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:42:59.186861 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:42:59.230604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 00:42:59.267603 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:42:59.487700 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Sep 13 00:42:59.267638 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:42:59.286587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:42:59.286630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:42:59.309410 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:42:59.309598 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 00:42:59.357114 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:42:59.357399 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 00:42:59.373582 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 00:42:59.407927 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 00:42:59.434800 systemd[1]: Switching root. 
Sep 13 00:42:59.581646 systemd-journald[266]: Journal stopped Sep 13 00:42:36.012055 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Sep 13 00:42:36.012060 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 13 00:42:36.012065 kernel: ACPI: FACS 0x000000008C66CF80 000040 Sep 13 00:42:36.012070 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Sep 13 00:42:36.012076 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Sep 13 00:42:36.012081 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 13 00:42:36.012086 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 13 00:42:36.012091 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Sep 13 00:42:36.012096 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 13 00:42:36.012101 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 13 00:42:36.012107 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 13 00:42:36.012113 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 13 00:42:36.012118 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 13 00:42:36.012123 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 13 00:42:36.012128 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 13 00:42:36.012133 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 13 00:42:36.012138 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 13 00:42:36.012143 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep
13 00:42:36.012148 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 13 00:42:36.012153 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 13 00:42:36.012159 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 13 00:42:36.012164 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Sep 13 00:42:36.012169 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 13 00:42:36.012175 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 13 00:42:36.012180 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 13 00:42:36.012185 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 13 00:42:36.012190 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 13 00:42:36.012195 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 13 00:42:36.012201 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 13 00:42:36.012206 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Sep 13 00:42:36.012211 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 13 00:42:36.012216 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Sep 13 00:42:36.012221 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Sep 13 00:42:36.012226 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Sep 13 00:42:36.012231 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Sep 13 00:42:36.012236 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Sep 13 00:42:36.012241 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Sep 13 00:42:36.012247 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Sep 13 00:42:36.012252 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Sep 13 00:42:36.012257 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Sep 13 00:42:36.012262 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Sep 13 00:42:36.012267 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Sep 13 00:42:36.012272 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Sep 13 00:42:36.012277 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Sep 13 00:42:36.012282 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Sep 13 00:42:36.012287 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Sep 13 00:42:36.012293 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Sep 13 00:42:36.012298 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Sep 13 00:42:36.012303 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Sep 13 00:42:36.012308 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Sep 13 00:42:36.012313 kernel: ACPI: Reserving DBG2 
table memory at [mem 0x8c597100-0x8c597153] Sep 13 00:42:36.012319 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Sep 13 00:42:36.012324 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Sep 13 00:42:36.012329 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Sep 13 00:42:36.012334 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Sep 13 00:42:36.012340 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Sep 13 00:42:36.012345 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Sep 13 00:42:36.012350 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Sep 13 00:42:36.012355 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Sep 13 00:42:36.012360 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Sep 13 00:42:36.012365 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Sep 13 00:42:36.012370 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Sep 13 00:42:36.012375 kernel: No NUMA configuration found Sep 13 00:42:36.012380 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 13 00:42:36.012385 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 13 00:42:36.012391 kernel: Zone ranges: Sep 13 00:42:36.012396 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 00:42:36.012402 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 13 00:42:36.012407 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Sep 13 00:42:36.012412 kernel: Movable zone start for each node Sep 13 00:42:36.012417 kernel: Early memory node ranges Sep 13 00:42:36.012422 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 13 00:42:36.012427 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 13 00:42:36.012432 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff] Sep 13 
00:42:36.012438 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff] Sep 13 00:42:36.012443 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Sep 13 00:42:36.012448 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Sep 13 00:42:36.012454 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Sep 13 00:42:36.012466 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Sep 13 00:42:36.012472 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 00:42:36.012478 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Sep 13 00:42:36.012483 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 13 00:42:36.012490 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Sep 13 00:42:36.012495 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Sep 13 00:42:36.012500 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Sep 13 00:42:36.012506 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Sep 13 00:42:36.012511 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Sep 13 00:42:36.012517 kernel: ACPI: PM-Timer IO Port: 0x1808 Sep 13 00:42:36.012522 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 13 00:42:36.012528 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 13 00:42:36.012533 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 13 00:42:36.012539 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 13 00:42:36.012545 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 13 00:42:36.012550 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 13 00:42:36.012555 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 13 00:42:36.012561 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 13 00:42:36.012566 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 13 00:42:36.012571 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge 
lint[0x1]) Sep 13 00:42:36.012577 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 13 00:42:36.012582 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 13 00:42:36.012589 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 13 00:42:36.012594 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 13 00:42:36.012599 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 13 00:42:36.012605 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 13 00:42:36.012610 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Sep 13 00:42:36.012616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 00:42:36.012621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 00:42:36.012626 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 00:42:36.012632 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 13 00:42:36.012638 kernel: TSC deadline timer available Sep 13 00:42:36.012644 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Sep 13 00:42:36.012649 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Sep 13 00:42:36.012655 kernel: Booting paravirtualized kernel on bare hardware Sep 13 00:42:36.012660 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 00:42:36.012666 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 13 00:42:36.012671 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 13 00:42:36.012677 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 13 00:42:36.012682 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 13 00:42:36.012689 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro 
consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:42:36.012695 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 00:42:36.012700 kernel: random: crng init done Sep 13 00:42:36.012706 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Sep 13 00:42:36.012711 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Sep 13 00:42:36.012717 kernel: Fallback order for Node 0: 0 Sep 13 00:42:36.012722 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Sep 13 00:42:36.012727 kernel: Policy zone: Normal Sep 13 00:42:36.012734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:42:36.012739 kernel: software IO TLB: area num 16. Sep 13 00:42:36.012745 kernel: Memory: 32720312K/33452980K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 732408K reserved, 0K cma-reserved) Sep 13 00:42:36.012750 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 13 00:42:36.012756 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 00:42:36.012761 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 00:42:36.012767 kernel: Dynamic Preempt: voluntary Sep 13 00:42:36.012773 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:42:36.012778 kernel: rcu: RCU event tracing is enabled. Sep 13 00:42:36.012785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 13 00:42:36.012791 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:42:36.012796 kernel: Rude variant of Tasks RCU enabled. Sep 13 00:42:36.012802 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:42:36.012807 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:42:36.012812 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 13 00:42:36.012818 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Sep 13 00:42:36.012823 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:42:36.012829 kernel: Console: colour dummy device 80x25 Sep 13 00:42:36.012834 kernel: printk: console [tty0] enabled Sep 13 00:42:36.012841 kernel: printk: console [ttyS1] enabled Sep 13 00:42:36.012846 kernel: ACPI: Core revision 20230628 Sep 13 00:42:36.012852 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Sep 13 00:42:36.012857 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 00:42:36.012863 kernel: DMAR: Host address width 39 Sep 13 00:42:36.012868 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Sep 13 00:42:36.012874 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Sep 13 00:42:36.012879 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Sep 13 00:42:36.012885 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Sep 13 00:42:36.012891 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Sep 13 00:42:36.012897 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Sep 13 00:42:36.012902 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Sep 13 00:42:36.012908 kernel: x2apic enabled Sep 13 00:42:36.012913 kernel: APIC: Switched APIC routing to: cluster x2apic Sep 13 00:42:36.012919 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Sep 13 00:42:36.012924 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Sep 13 00:42:36.012930 kernel: CPU0: Thermal monitoring enabled (TM1) Sep 13 00:42:36.012935 kernel: process: using mwait in idle threads Sep 13 00:42:36.012941 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 13 00:42:36.012947 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 13 00:42:36.012952 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 00:42:36.012957 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 13 00:42:36.012963 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 13 00:42:36.012968 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 13 00:42:36.012973 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 13 00:42:36.012979 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 13 00:42:36.012984 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 00:42:36.012990 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 13 00:42:36.012995 kernel: TAA: Mitigation: TSX disabled Sep 13 00:42:36.013001 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Sep 13 00:42:36.013007 kernel: SRBDS: Mitigation: Microcode Sep 13 00:42:36.013012 kernel: GDS: Mitigation: Microcode Sep 13 00:42:36.013018 kernel: active return thunk: its_return_thunk Sep 13 00:42:36.013023 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 00:42:36.013028 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Sep 13 00:42:36.013034 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 00:42:36.013039 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 00:42:36.013044 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 00:42:36.013050 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 13 00:42:36.013055 kernel: 
x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 13 00:42:36.013061 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 00:42:36.013067 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 13 00:42:36.013072 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 13 00:42:36.013078 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Sep 13 00:42:36.013083 kernel: Freeing SMP alternatives memory: 32K Sep 13 00:42:36.013088 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:42:36.013094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:42:36.013100 kernel: landlock: Up and running. Sep 13 00:42:36.013105 kernel: SELinux: Initializing. Sep 13 00:42:36.013110 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:42:36.013116 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:42:36.013121 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 13 00:42:36.013128 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 13 00:42:36.013133 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 13 00:42:36.013139 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 13 00:42:36.013144 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Sep 13 00:42:36.013150 kernel: ... version: 4 Sep 13 00:42:36.013155 kernel: ... bit width: 48 Sep 13 00:42:36.013161 kernel: ... generic registers: 4 Sep 13 00:42:36.013166 kernel: ... value mask: 0000ffffffffffff Sep 13 00:42:36.013173 kernel: ... max period: 00007fffffffffff Sep 13 00:42:36.013178 kernel: ... fixed-purpose events: 3 Sep 13 00:42:36.013183 kernel: ... 
event mask: 000000070000000f Sep 13 00:42:36.013189 kernel: signal: max sigframe size: 2032 Sep 13 00:42:36.013194 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Sep 13 00:42:36.013200 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:42:36.013205 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:42:36.013211 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Sep 13 00:42:36.013216 kernel: smp: Bringing up secondary CPUs ... Sep 13 00:42:36.013222 kernel: smpboot: x86: Booting SMP configuration: Sep 13 00:42:36.013228 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Sep 13 00:42:36.013234 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Sep 13 00:42:36.013240 kernel: smp: Brought up 1 node, 16 CPUs Sep 13 00:42:36.013245 kernel: smpboot: Max logical packages: 1 Sep 13 00:42:36.013250 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Sep 13 00:42:36.013256 kernel: devtmpfs: initialized Sep 13 00:42:36.013261 kernel: x86/mm: Memory block size: 128MB Sep 13 00:42:36.013267 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes) Sep 13 00:42:36.013273 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Sep 13 00:42:36.013279 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:42:36.013284 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 13 00:42:36.013290 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:42:36.013295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:42:36.013301 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:42:36.013306 kernel: audit: type=2000 audit(1757724149.040:1): state=initialized 
audit_enabled=0 res=1 Sep 13 00:42:36.013311 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:42:36.013317 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 00:42:36.013323 kernel: cpuidle: using governor menu Sep 13 00:42:36.013328 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:42:36.013334 kernel: dca service started, version 1.12.1 Sep 13 00:42:36.013339 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 13 00:42:36.013345 kernel: PCI: Using configuration type 1 for base access Sep 13 00:42:36.013350 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Sep 13 00:42:36.013355 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Sep 13 00:42:36.013361 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:42:36.013366 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:42:36.013373 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:42:36.013378 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:42:36.013384 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:42:36.013389 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:42:36.013394 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:42:36.013400 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Sep 13 00:42:36.013405 kernel: ACPI: Dynamic OEM Table Load: Sep 13 00:42:36.013411 kernel: ACPI: SSDT 0xFFFF8EA281B17800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Sep 13 00:42:36.013416 kernel: ACPI: Dynamic OEM Table Load: Sep 13 00:42:36.013422 kernel: ACPI: SSDT 0xFFFF8EA281B0D800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Sep 13 00:42:36.013428 kernel: ACPI: Dynamic OEM Table Load: Sep 13 00:42:36.013433 kernel: ACPI: SSDT 0xFFFF8EA280246400 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Sep 13 
00:42:36.013439 kernel: ACPI: Dynamic OEM Table Load: Sep 13 00:42:36.013444 kernel: ACPI: SSDT 0xFFFF8EA281E78800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Sep 13 00:42:36.013449 kernel: ACPI: Dynamic OEM Table Load: Sep 13 00:42:36.013455 kernel: ACPI: SSDT 0xFFFF8EA28012A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Sep 13 00:42:36.013460 kernel: ACPI: Dynamic OEM Table Load: Sep 13 00:42:36.013468 kernel: ACPI: SSDT 0xFFFF8EA281B12400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Sep 13 00:42:36.013474 kernel: ACPI: _OSC evaluated successfully for all CPUs Sep 13 00:42:36.013480 kernel: ACPI: Interpreter enabled Sep 13 00:42:36.013486 kernel: ACPI: PM: (supports S0 S5) Sep 13 00:42:36.013491 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 00:42:36.013496 kernel: HEST: Enabling Firmware First mode for corrected errors. Sep 13 00:42:36.013502 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Sep 13 00:42:36.013507 kernel: HEST: Table parsing has been initialized. Sep 13 00:42:36.013513 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Sep 13 00:42:36.013518 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 00:42:36.013524 kernel: PCI: Using E820 reservations for host bridge windows Sep 13 00:42:36.013530 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Sep 13 00:42:36.013535 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Sep 13 00:42:36.013541 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Sep 13 00:42:36.013546 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Sep 13 00:42:36.013552 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Sep 13 00:42:36.013558 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Sep 13 00:42:36.013563 kernel: ACPI: \_TZ_.FN00: New power resource Sep 13 00:42:36.013569 kernel: ACPI: \_TZ_.FN01: New power resource Sep 13 00:42:36.013574 kernel: ACPI: \_TZ_.FN02: New power resource Sep 13 00:42:36.013580 kernel: ACPI: \_TZ_.FN03: New power resource Sep 13 00:42:36.013586 kernel: ACPI: \_TZ_.FN04: New power resource Sep 13 00:42:36.013591 kernel: ACPI: \PIN_: New power resource Sep 13 00:42:36.013597 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Sep 13 00:42:36.013669 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:42:36.013725 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Sep 13 00:42:36.013775 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Sep 13 00:42:36.013785 kernel: PCI host bridge to bus 0000:00 Sep 13 00:42:36.013839 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 00:42:36.013883 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 00:42:36.013928 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 00:42:36.013971 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Sep 13 00:42:36.014015 kernel: pci_bus 0000:00: root bus resource [mem 
0xfc800000-0xfe7fffff window] Sep 13 00:42:36.014058 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Sep 13 00:42:36.014119 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Sep 13 00:42:36.014177 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Sep 13 00:42:36.014229 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.014283 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Sep 13 00:42:36.014333 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Sep 13 00:42:36.014386 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Sep 13 00:42:36.014439 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Sep 13 00:42:36.014497 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Sep 13 00:42:36.014548 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Sep 13 00:42:36.014596 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Sep 13 00:42:36.014650 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Sep 13 00:42:36.014700 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Sep 13 00:42:36.014752 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Sep 13 00:42:36.014805 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Sep 13 00:42:36.014855 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 13 00:42:36.014911 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Sep 13 00:42:36.014960 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 13 00:42:36.015014 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 13 00:42:36.015065 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 13 00:42:36.015117 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 13 00:42:36.015177 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 13 00:42:36.015230 kernel: pci 0000:00:16.1: reg 0x10: [mem 
0x95519000-0x95519fff 64bit] Sep 13 00:42:36.015279 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 13 00:42:36.015333 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 13 00:42:36.015384 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 13 00:42:36.015436 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 13 00:42:36.015499 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 13 00:42:36.015551 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 13 00:42:36.015599 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 13 00:42:36.015648 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 13 00:42:36.015697 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 13 00:42:36.015746 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 13 00:42:36.015799 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 13 00:42:36.015848 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 13 00:42:36.015903 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 13 00:42:36.015953 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.016011 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 13 00:42:36.016064 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.016120 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 13 00:42:36.016170 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.016225 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 13 00:42:36.016275 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.016331 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 13 00:42:36.016383 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.016437 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 13 00:42:36.016491 kernel: pci 0000:00:1e.0: reg 0x10: 
[mem 0x00000000-0x00000fff 64bit] Sep 13 00:42:36.016546 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 13 00:42:36.016600 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 13 00:42:36.016653 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 13 00:42:36.016704 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 13 00:42:36.016760 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 13 00:42:36.016810 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 13 00:42:36.016866 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 13 00:42:36.016918 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 13 00:42:36.016972 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 13 00:42:36.017023 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 13 00:42:36.017074 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 13 00:42:36.017126 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 13 00:42:36.017183 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Sep 13 00:42:36.017235 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 13 00:42:36.017286 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 13 00:42:36.017340 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 13 00:42:36.017391 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 13 00:42:36.017443 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 13 00:42:36.017499 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:42:36.017549 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 13 00:42:36.017600 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 00:42:36.017651 
kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 13 00:42:36.017707 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 13 00:42:36.017762 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 13 00:42:36.017813 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 13 00:42:36.017864 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 13 00:42:36.017914 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 13 00:42:36.017966 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.018015 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 13 00:42:36.018066 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 13 00:42:36.018120 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 13 00:42:36.018175 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 13 00:42:36.018227 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 13 00:42:36.018278 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 13 00:42:36.018329 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 13 00:42:36.018380 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 13 00:42:36.018431 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 13 00:42:36.018487 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 13 00:42:36.018541 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 13 00:42:36.018590 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 13 00:42:36.018640 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 13 00:42:36.018697 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 13 00:42:36.018747 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 13 00:42:36.018799 kernel: pci 0000:06:00.0: supports D1 D2 Sep 13 00:42:36.018850 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:42:36.018904 kernel: pci 0000:00:1c.3: PCI bridge 
to [bus 06-07] Sep 13 00:42:36.018953 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 13 00:42:36.019004 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.019059 kernel: pci_bus 0000:07: extended config space not accessible Sep 13 00:42:36.019118 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 13 00:42:36.019172 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 13 00:42:36.019225 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 13 00:42:36.019281 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 13 00:42:36.019334 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 00:42:36.019389 kernel: pci 0000:07:00.0: supports D1 D2 Sep 13 00:42:36.019443 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 13 00:42:36.019504 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 13 00:42:36.019567 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 13 00:42:36.019629 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.019639 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 13 00:42:36.019649 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 13 00:42:36.019657 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 13 00:42:36.019664 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 13 00:42:36.019671 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 13 00:42:36.019680 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 13 00:42:36.019687 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 13 00:42:36.019694 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 13 00:42:36.019702 kernel: iommu: Default domain type: Translated Sep 13 00:42:36.019709 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 00:42:36.019717 kernel: PCI: Using ACPI for IRQ 
routing Sep 13 00:42:36.019725 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 00:42:36.019732 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 13 00:42:36.019739 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff] Sep 13 00:42:36.019746 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 13 00:42:36.019753 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 13 00:42:36.019760 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 13 00:42:36.019767 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 13 00:42:36.019830 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 13 00:42:36.019896 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 13 00:42:36.019961 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 00:42:36.019972 kernel: vgaarb: loaded Sep 13 00:42:36.019980 kernel: clocksource: Switched to clocksource tsc-early Sep 13 00:42:36.019987 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:42:36.019995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:42:36.020002 kernel: pnp: PnP ACPI init Sep 13 00:42:36.020063 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 13 00:42:36.020125 kernel: pnp 00:02: [dma 0 disabled] Sep 13 00:42:36.020186 kernel: pnp 00:03: [dma 0 disabled] Sep 13 00:42:36.020248 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 13 00:42:36.020304 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 13 00:42:36.020363 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Sep 13 00:42:36.020419 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Sep 13 00:42:36.020479 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Sep 13 00:42:36.020535 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Sep 13 00:42:36.020589 kernel: system 00:05: [mem 
0xfed20000-0xfed3ffff] has been reserved Sep 13 00:42:36.020648 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 13 00:42:36.020704 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 13 00:42:36.020760 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 13 00:42:36.020819 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Sep 13 00:42:36.020878 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 13 00:42:36.020933 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 13 00:42:36.020988 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 13 00:42:36.021043 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 13 00:42:36.021097 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 13 00:42:36.021153 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Sep 13 00:42:36.021211 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Sep 13 00:42:36.021224 kernel: pnp: PnP ACPI: found 9 devices Sep 13 00:42:36.021231 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 00:42:36.021239 kernel: NET: Registered PF_INET protocol family Sep 13 00:42:36.021246 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:42:36.021254 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 13 00:42:36.021261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:42:36.021268 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:42:36.021276 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 13 00:42:36.021284 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 13 00:42:36.021292 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, 
linear) Sep 13 00:42:36.021299 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 00:42:36.021306 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:42:36.021313 kernel: NET: Registered PF_XDP protocol family Sep 13 00:42:36.021374 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 13 00:42:36.021435 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 13 00:42:36.021499 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 13 00:42:36.021563 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021628 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021691 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021753 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 13 00:42:36.021814 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 13 00:42:36.021875 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 13 00:42:36.021935 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 00:42:36.021995 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 13 00:42:36.022059 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 13 00:42:36.022121 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 13 00:42:36.022182 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 13 00:42:36.022241 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 13 00:42:36.022301 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 13 00:42:36.022363 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 13 00:42:36.022424 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 13 00:42:36.022488 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 13 00:42:36.022551 kernel: pci 0000:06:00.0: bridge 
window [io 0x3000-0x3fff] Sep 13 00:42:36.022614 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.022675 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 13 00:42:36.022736 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 13 00:42:36.022795 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 13 00:42:36.022851 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 13 00:42:36.022908 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 00:42:36.022963 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 00:42:36.023016 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 00:42:36.023069 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 13 00:42:36.023122 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 13 00:42:36.023183 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 13 00:42:36.023239 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 13 00:42:36.023303 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 13 00:42:36.023359 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 13 00:42:36.023422 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 13 00:42:36.023484 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 13 00:42:36.023536 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 13 00:42:36.023582 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 13 00:42:36.023632 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 13 00:42:36.023683 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 13 00:42:36.023691 kernel: PCI: CLS 64 bytes, default 64 Sep 13 00:42:36.023697 kernel: DMAR: No ATSR found Sep 13 00:42:36.023703 kernel: DMAR: No SATC found Sep 13 00:42:36.023709 kernel: DMAR: dmar0: Using 
Queued invalidation Sep 13 00:42:36.023759 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 13 00:42:36.023810 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 13 00:42:36.023860 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 13 00:42:36.023914 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 13 00:42:36.023964 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 13 00:42:36.024014 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 13 00:42:36.024063 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 13 00:42:36.024112 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 13 00:42:36.024162 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 13 00:42:36.024212 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 13 00:42:36.024262 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 13 00:42:36.024314 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 13 00:42:36.024363 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 13 00:42:36.024413 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 13 00:42:36.024466 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 13 00:42:36.024517 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 13 00:42:36.024568 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 13 00:42:36.024616 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 13 00:42:36.024667 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 13 00:42:36.024719 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 13 00:42:36.024769 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Sep 13 00:42:36.024819 kernel: pci 0000:01:00.0: Adding to iommu group 1 Sep 13 00:42:36.024871 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 13 00:42:36.024923 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 13 00:42:36.024974 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 13 00:42:36.025026 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 13 00:42:36.025079 kernel: pci 0000:07:00.0: Adding to iommu group 
17 Sep 13 00:42:36.025089 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 13 00:42:36.025096 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 00:42:36.025102 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 13 00:42:36.025107 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 13 00:42:36.025113 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 13 00:42:36.025119 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 13 00:42:36.025125 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 13 00:42:36.025177 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 13 00:42:36.025186 kernel: Initialise system trusted keyrings Sep 13 00:42:36.025194 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 13 00:42:36.025199 kernel: Key type asymmetric registered Sep 13 00:42:36.025205 kernel: Asymmetric key parser 'x509' registered Sep 13 00:42:36.025211 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 00:42:36.025217 kernel: io scheduler mq-deadline registered Sep 13 00:42:36.025222 kernel: io scheduler kyber registered Sep 13 00:42:36.025228 kernel: io scheduler bfq registered Sep 13 00:42:36.025277 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 13 00:42:36.025328 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 13 00:42:36.025380 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 13 00:42:36.025429 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 13 00:42:36.025484 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 13 00:42:36.025535 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 13 00:42:36.025589 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 13 00:42:36.025598 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 13 00:42:36.025604 kernel: ERST: Error 
Record Serialization Table (ERST) support is initialized. Sep 13 00:42:36.025612 kernel: pstore: Using crash dump compression: deflate Sep 13 00:42:36.025617 kernel: pstore: Registered erst as persistent store backend Sep 13 00:42:36.025623 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 00:42:36.025629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:42:36.025635 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 00:42:36.025641 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 13 00:42:36.025647 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 13 00:42:36.025697 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 13 00:42:36.025708 kernel: i8042: PNP: No PS/2 controller found. Sep 13 00:42:36.025752 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 13 00:42:36.025799 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 13 00:42:36.025846 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-13T00:42:34 UTC (1757724154) Sep 13 00:42:36.025892 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 13 00:42:36.025900 kernel: intel_pstate: Intel P-state driver initializing Sep 13 00:42:36.025906 kernel: intel_pstate: Disabling energy efficiency optimization Sep 13 00:42:36.025912 kernel: intel_pstate: HWP enabled Sep 13 00:42:36.025920 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Sep 13 00:42:36.025926 kernel: vesafb: scrolling: redraw Sep 13 00:42:36.025931 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Sep 13 00:42:36.025937 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000005e393bb2, using 768k, total 768k Sep 13 00:42:36.025943 kernel: Console: switching to colour frame buffer device 128x48 Sep 13 00:42:36.025949 kernel: fb0: VESA VGA frame buffer device Sep 13 00:42:36.025955 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:42:36.025960 kernel: Segment Routing with 
IPv6 Sep 13 00:42:36.025966 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:42:36.025972 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:42:36.025979 kernel: Key type dns_resolver registered Sep 13 00:42:36.025984 kernel: microcode: Current revision: 0x000000fc Sep 13 00:42:36.025990 kernel: microcode: Updated early from: 0x000000f4 Sep 13 00:42:36.025996 kernel: microcode: Microcode Update Driver: v2.2. Sep 13 00:42:36.026001 kernel: IPI shorthand broadcast: enabled Sep 13 00:42:36.026007 kernel: sched_clock: Marking stable (1572001033, 1379367810)->(4421608757, -1470239914) Sep 13 00:42:36.026013 kernel: registered taskstats version 1 Sep 13 00:42:36.026019 kernel: Loading compiled-in X.509 certificates Sep 13 00:42:36.026025 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 00:42:36.026031 kernel: Key type .fscrypt registered Sep 13 00:42:36.026037 kernel: Key type fscrypt-provisioning registered Sep 13 00:42:36.026043 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:42:36.026048 kernel: ima: No architecture policies found Sep 13 00:42:36.026054 kernel: clk: Disabling unused clocks Sep 13 00:42:36.026060 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 00:42:36.026066 kernel: Write protecting the kernel read-only data: 36864k Sep 13 00:42:36.026072 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 00:42:36.026078 kernel: Run /init as init process Sep 13 00:42:36.026084 kernel: with arguments: Sep 13 00:42:36.026090 kernel: /init Sep 13 00:42:36.026096 kernel: with environment: Sep 13 00:42:36.026101 kernel: HOME=/ Sep 13 00:42:36.026107 kernel: TERM=linux Sep 13 00:42:36.026113 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:42:36.026120 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD 
+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:42:36.026128 systemd[1]: Detected architecture x86-64. Sep 13 00:42:36.026134 systemd[1]: Running in initrd. Sep 13 00:42:36.026140 systemd[1]: No hostname configured, using default hostname. Sep 13 00:42:36.026145 systemd[1]: Hostname set to . Sep 13 00:42:36.026151 systemd[1]: Initializing machine ID from random generator. Sep 13 00:42:36.026157 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:42:36.026163 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:42:36.026169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:42:36.026177 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 00:42:36.026183 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:42:36.026189 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:42:36.026195 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:42:36.026201 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:42:36.026208 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Sep 13 00:42:36.026214 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Sep 13 00:42:36.026220 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Sep 13 00:42:36.026226 kernel: clocksource: Switched to clocksource tsc Sep 13 00:42:36.026232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:42:36.026238 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:42:36.026244 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:42:36.026250 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:42:36.026256 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:42:36.026262 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:42:36.026269 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:42:36.026275 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:42:36.026281 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:42:36.026287 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:42:36.026293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:42:36.026299 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:42:36.026305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:42:36.026311 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 00:42:36.026317 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:42:36.026324 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:42:36.026330 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:42:36.026336 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:42:36.026342 systemd[1]: Starting systemd-journald.service - Journal Service... 
Sep 13 00:42:36.026358 systemd-journald[266]: Collecting audit messages is disabled. Sep 13 00:42:36.026374 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:42:36.026380 systemd-journald[266]: Journal started Sep 13 00:42:36.026393 systemd-journald[266]: Runtime Journal (/run/log/journal/e81ad24ecb5142c78b1fce2a2184a42f) is 8.0M, max 639.9M, 631.9M free. Sep 13 00:42:36.039972 systemd-modules-load[268]: Inserted module 'overlay' Sep 13 00:42:36.087113 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:42:36.087150 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 00:42:36.130500 kernel: Bridge firewalling registered Sep 13 00:42:36.130518 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:42:36.148717 systemd-modules-load[268]: Inserted module 'br_netfilter' Sep 13 00:42:36.148987 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:42:36.149097 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:42:36.149186 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:42:36.149271 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:42:36.163882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:42:36.248714 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:42:36.261170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:42:36.275105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:42:36.308558 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 13 00:42:36.329137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:42:36.351202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:42:36.389760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:42:36.402366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:42:36.415150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:42:36.421595 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:42:36.425112 systemd-resolved[295]: Positive Trust Anchors: Sep 13 00:42:36.425122 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:42:36.425172 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:42:36.427444 systemd-resolved[295]: Defaulting to hostname 'linux'. Sep 13 00:42:36.428186 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:42:36.455829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:42:36.466724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:42:36.507089 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 13 00:42:36.535899 dracut-cmdline[308]: dracut-dracut-053 Sep 13 00:42:36.543710 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 00:42:36.744467 kernel: SCSI subsystem initialized Sep 13 00:42:36.768493 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:42:36.791496 kernel: iscsi: registered transport (tcp) Sep 13 00:42:36.824347 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:42:36.824367 kernel: QLogic iSCSI HBA Driver Sep 13 00:42:36.857463 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:42:36.892717 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:42:36.953785 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:42:36.953806 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:42:36.973641 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:42:37.031534 kernel: raid6: avx2x4 gen() 52141 MB/s Sep 13 00:42:37.063533 kernel: raid6: avx2x2 gen() 52438 MB/s Sep 13 00:42:37.100616 kernel: raid6: avx2x1 gen() 43961 MB/s Sep 13 00:42:37.100635 kernel: raid6: using algorithm avx2x2 gen() 52438 MB/s Sep 13 00:42:37.148135 kernel: raid6: .... 
xor() 31144 MB/s, rmw enabled Sep 13 00:42:37.148154 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:42:37.189516 kernel: xor: automatically using best checksumming function avx Sep 13 00:42:37.303506 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:42:37.309374 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:42:37.337802 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:42:37.345096 systemd-udevd[493]: Using default interface naming scheme 'v255'. Sep 13 00:42:37.348594 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:42:37.383698 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:42:37.428474 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Sep 13 00:42:37.445672 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 00:42:37.470717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 00:42:37.531618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 00:42:37.564587 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 13 00:42:37.564643 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 13 00:42:37.591477 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:42:37.593713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 00:42:37.618917 kernel: libata version 3.00 loaded. Sep 13 00:42:37.618932 kernel: ACPI: bus type USB registered Sep 13 00:42:37.618942 kernel: usbcore: registered new interface driver usbfs Sep 13 00:42:37.595101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 13 00:42:37.658874 kernel: usbcore: registered new interface driver hub Sep 13 00:42:37.658898 kernel: usbcore: registered new device driver usb Sep 13 00:42:37.595133 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:42:37.675418 kernel: PTP clock support registered Sep 13 00:42:37.672512 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:42:37.681397 kernel: ahci 0000:00:17.0: version 3.0 Sep 13 00:42:37.681506 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:42:37.681516 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 13 00:42:37.719553 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 13 00:42:37.727559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:42:37.746130 kernel: AES CTR mode by8 optimization enabled Sep 13 00:42:37.727603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 00:42:38.141835 kernel: scsi host0: ahci Sep 13 00:42:38.141967 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 00:42:38.142059 kernel: scsi host1: ahci Sep 13 00:42:38.142144 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 13 00:42:38.142260 kernel: scsi host2: ahci Sep 13 00:42:38.142391 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 13 00:42:38.142525 kernel: scsi host3: ahci Sep 13 00:42:38.142653 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 13 00:42:38.142781 kernel: scsi host4: ahci Sep 13 00:42:38.142906 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 13 00:42:38.143033 kernel: scsi host5: ahci Sep 13 00:42:38.143208 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 13 00:42:38.143327 kernel: scsi host6: ahci Sep 13 00:42:38.143440 kernel: hub 1-0:1.0: USB hub found Sep 13 00:42:38.143584 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Sep 13 00:42:38.143602 kernel: hub 1-0:1.0: 16 ports detected Sep 13 00:42:38.143724 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Sep 13 00:42:38.143742 kernel: hub 2-0:1.0: USB hub found Sep 13 00:42:38.143855 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Sep 13 00:42:38.143869 kernel: hub 2-0:1.0: 10 ports detected Sep 13 00:42:38.143972 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Sep 13 00:42:38.143989 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Sep 13 00:42:38.144002 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 13 00:42:38.144026 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Sep 13 00:42:38.144039 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Sep 13 00:42:37.745579 systemd[1]: Stopping 
systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:42:38.203543 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 13 00:42:38.203558 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 13 00:42:38.132043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:42:38.351663 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 13 00:42:38.352173 kernel: hub 1-14:1.0: USB hub found Sep 13 00:42:38.352615 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 00:42:38.352979 kernel: hub 1-14:1.0: 4 ports detected Sep 13 00:42:38.353330 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9e Sep 13 00:42:38.353747 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 13 00:42:38.354010 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 13 00:42:38.354306 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Sep 13 00:42:38.354583 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 00:42:38.141925 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 00:42:38.509392 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 13 00:42:38.509632 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 13 00:42:38.509705 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:ef:9f Sep 13 00:42:38.509771 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 13 00:42:38.509835 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 13 00:42:38.509900 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:38.509908 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:38.509916 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:38.509923 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:38.509931 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 00:42:38.193989 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 00:42:38.730625 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 13 00:42:38.730705 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 00:42:38.730724 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 13 00:42:38.730844 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 00:42:38.730853 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 13 00:42:38.730863 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 00:42:38.730870 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 00:42:38.730944 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 13 00:42:38.730952 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 13 00:42:38.731016 kernel: ata1.00: Features: NCQ-prio Sep 13 00:42:38.731024 kernel: ata2.00: Features: NCQ-prio Sep 13 00:42:38.731031 kernel: ata2.00: configured for UDMA/133 Sep 13 00:42:38.731038 kernel: ata1.00: configured for UDMA/133 Sep 13 00:42:38.731045 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 00:42:38.258560 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 13 00:42:38.749477 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 13 00:42:38.286586 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 00:42:38.608567 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 00:42:38.775467 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:42:38.775512 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 13 00:42:38.820114 kernel: usbcore: registered new interface driver usbhid Sep 13 00:42:38.820132 kernel: usbhid: USB HID core driver Sep 13 00:42:38.823516 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 13 00:42:38.823617 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 13 00:42:38.861590 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:42:38.948578 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 13 00:42:38.948593 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Sep 13 00:42:38.948686 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 13 00:42:38.948757 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 13 00:42:38.948839 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 00:42:38.891741 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 13 00:42:39.470134 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 13 00:42:39.470152 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 00:42:39.470241 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:42:39.470250 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 13 00:42:39.470321 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 13 00:42:39.470384 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 13 00:42:39.470466 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 13 00:42:39.470552 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:42:39.470612 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 13 00:42:39.470671 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:42:39.470679 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:42:39.470687 kernel: GPT:9289727 != 937703087 Sep 13 00:42:39.470696 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:42:39.470703 kernel: GPT:9289727 != 937703087 Sep 13 00:42:39.470709 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 13 00:42:39.470716 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:42:39.470723 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 13 00:42:39.470783 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 13 00:42:39.470858 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 13 00:42:39.470917 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 13 00:42:39.470994 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 13 00:42:39.471056 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 13 00:42:39.471129 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 13 00:42:39.471191 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 13 00:42:39.471249 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 13 00:42:39.471309 kernel: ata1.00: Enabling discard_zeroes_data Sep 13 00:42:39.471317 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 13 00:42:39.471381 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 13 00:42:39.471441 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 13 00:42:39.476750 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:42:39.484467 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 13 00:42:39.484581 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/sdb3 scanned by (udev-worker) (542) Sep 13 00:42:39.498280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Sep 13 00:42:39.580643 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (548) Sep 13 00:42:39.566842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 00:42:39.596180 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 13 00:42:39.625895 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 13 00:42:39.636659 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 13 00:42:39.666003 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 13 00:42:39.709640 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 00:42:39.750601 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:42:39.750618 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:42:39.750630 disk-uuid[720]: Primary Header is updated. Sep 13 00:42:39.750630 disk-uuid[720]: Secondary Entries is updated. Sep 13 00:42:39.750630 disk-uuid[720]: Secondary Header is updated. Sep 13 00:42:39.807550 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:42:39.807563 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:42:39.807572 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:42:39.837467 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:42:40.814255 kernel: ata2.00: Enabling discard_zeroes_data Sep 13 00:42:40.834300 disk-uuid[721]: The operation has completed successfully. Sep 13 00:42:40.842587 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 13 00:42:40.870131 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:42:40.870195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 00:42:40.905727 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 00:42:40.932644 sh[738]: Success Sep 13 00:42:40.941559 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:42:40.990002 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 13 00:42:41.007804 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 00:42:41.018659 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 13 00:42:41.063569 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 00:42:41.063584 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:42:41.078970 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 00:42:41.097876 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 00:42:41.115474 kernel: BTRFS info (device dm-0): using free space tree Sep 13 00:42:41.154510 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 13 00:42:41.157166 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 00:42:41.165898 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 00:42:41.179727 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 00:42:41.283791 kernel: BTRFS info (device sdb6): first mount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075 Sep 13 00:42:41.283804 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:42:41.283812 kernel: BTRFS info (device sdb6): using free space tree Sep 13 00:42:41.283820 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 13 00:42:41.283830 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 13 00:42:41.306506 kernel: BTRFS info (device sdb6): last unmount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075 Sep 13 00:42:41.316955 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 00:42:41.329395 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 13 00:42:41.363969 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 00:42:41.404956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 00:42:41.433659 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 00:42:41.445026 systemd-networkd[923]: lo: Link UP Sep 13 00:42:41.443014 ignition[819]: Ignition 2.19.0 Sep 13 00:42:41.445028 systemd-networkd[923]: lo: Gained carrier Sep 13 00:42:41.443018 ignition[819]: Stage: fetch-offline Sep 13 00:42:41.445319 unknown[819]: fetched base config from "system" Sep 13 00:42:41.443042 ignition[819]: no configs at "/usr/lib/ignition/base.d" Sep 13 00:42:41.445338 unknown[819]: fetched user config from "system" Sep 13 00:42:41.443047 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 13 00:42:41.446791 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 00:42:41.443104 ignition[819]: parsed url from cmdline: "" Sep 13 00:42:41.447615 systemd-networkd[923]: Enumeration completed Sep 13 00:42:41.443106 ignition[819]: no config URL provided Sep 13 00:42:41.448800 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:42:41.443108 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:42:41.465963 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 00:42:41.443131 ignition[819]: parsing config with SHA512: 87119c394706ec875dde59d2a5253b5552ce4067b113cb732224f319925b871b69c946a4a409dc012147818c7af0452ee413a631529a9e9a4fe3f7b482322cf5 Sep 13 00:42:41.478452 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:42:41.445974 ignition[819]: fetch-offline: fetch-offline passed Sep 13 00:42:41.483048 systemd[1]: Reached target network.target - Network. 
Sep 13 00:42:41.445977 ignition[819]: POST message to Packet Timeline
Sep 13 00:42:41.489877 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:42:41.445980 ignition[819]: POST Status error: resource requires networking
Sep 13 00:42:41.509136 systemd-networkd[923]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:42:41.446020 ignition[819]: Ignition finished successfully
Sep 13 00:42:41.703563 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Sep 13 00:42:41.512788 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:42:41.541793 ignition[937]: Ignition 2.19.0
Sep 13 00:42:41.700085 systemd-networkd[923]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:42:41.541806 ignition[937]: Stage: kargs
Sep 13 00:42:41.542147 ignition[937]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:41.542170 ignition[937]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:41.543960 ignition[937]: kargs: kargs passed
Sep 13 00:42:41.543969 ignition[937]: POST message to Packet Timeline
Sep 13 00:42:41.543995 ignition[937]: GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:41.545541 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50975->[::1]:53: read: connection refused
Sep 13 00:42:41.745747 ignition[937]: GET https://metadata.packet.net/metadata: attempt #2
Sep 13 00:42:41.746957 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:33689->[::1]:53: read: connection refused
Sep 13 00:42:41.906500 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Sep 13 00:42:41.909602 systemd-networkd[923]: eno1: Link UP
Sep 13 00:42:41.909808 systemd-networkd[923]: eno2: Link UP
Sep 13 00:42:41.910001 systemd-networkd[923]: enp1s0f0np0: Link UP
Sep 13 00:42:41.910228 systemd-networkd[923]: enp1s0f0np0: Gained carrier
Sep 13 00:42:41.927740 systemd-networkd[923]: enp1s0f1np1: Link UP
Sep 13 00:42:41.965703 systemd-networkd[923]: enp1s0f0np0: DHCPv4 address 139.178.94.199/31, gateway 139.178.94.198 acquired from 145.40.83.140
Sep 13 00:42:42.147383 ignition[937]: GET https://metadata.packet.net/metadata: attempt #3
Sep 13 00:42:42.148486 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46539->[::1]:53: read: connection refused
Sep 13 00:42:42.709322 systemd-networkd[923]: enp1s0f1np1: Gained carrier
Sep 13 00:42:42.948936 ignition[937]: GET https://metadata.packet.net/metadata: attempt #4
Sep 13 00:42:42.950212 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35729->[::1]:53: read: connection refused
Sep 13 00:42:43.669092 systemd-networkd[923]: enp1s0f0np0: Gained IPv6LL
Sep 13 00:42:44.181114 systemd-networkd[923]: enp1s0f1np1: Gained IPv6LL
Sep 13 00:42:44.551846 ignition[937]: GET https://metadata.packet.net/metadata: attempt #5
Sep 13 00:42:44.553044 ignition[937]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38371->[::1]:53: read: connection refused
Sep 13 00:42:47.756619 ignition[937]: GET https://metadata.packet.net/metadata: attempt #6
Sep 13 00:42:48.825206 ignition[937]: GET result: OK
Sep 13 00:42:49.262748 ignition[937]: Ignition finished successfully
Sep 13 00:42:49.267774 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:42:49.296708 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:42:49.302971 ignition[955]: Ignition 2.19.0
Sep 13 00:42:49.302975 ignition[955]: Stage: disks
Sep 13 00:42:49.303083 ignition[955]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:49.303090 ignition[955]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:49.303624 ignition[955]: disks: disks passed
Sep 13 00:42:49.303627 ignition[955]: POST message to Packet Timeline
Sep 13 00:42:49.303636 ignition[955]: GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:50.240387 ignition[955]: GET result: OK
Sep 13 00:42:50.660333 ignition[955]: Ignition finished successfully
Sep 13 00:42:50.663653 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:42:50.679711 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:42:50.697720 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:42:50.718751 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:42:50.740880 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:42:50.761876 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:42:50.791730 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:42:50.832083 systemd-fsck[971]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:42:50.841971 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:42:50.852696 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:42:50.972277 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:42:50.972466 kernel: EXT4-fs (sdb9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:42:50.986338 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:42:51.012670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:42:51.021213 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:42:51.140434 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (980)
Sep 13 00:42:51.140448 kernel: BTRFS info (device sdb6): first mount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:51.140456 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:42:51.140468 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 00:42:51.140515 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 00:42:51.140522 kernel: BTRFS info (device sdb6): auto enabling async discard
Sep 13 00:42:51.063104 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:42:51.154842 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Sep 13 00:42:51.177598 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:42:51.177619 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:42:51.238666 coreos-metadata[998]: Sep 13 00:42:51.215 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 00:42:51.201418 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:42:51.266635 coreos-metadata[982]: Sep 13 00:42:51.215 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 00:42:51.228816 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:42:51.263698 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:42:51.307591 initrd-setup-root[1012]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:42:51.317647 initrd-setup-root[1019]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:42:51.327556 initrd-setup-root[1026]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:42:51.337578 initrd-setup-root[1033]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:42:51.361369 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:42:51.384753 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:42:51.419701 kernel: BTRFS info (device sdb6): last unmount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:51.402967 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:42:51.430189 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:42:51.452756 ignition[1100]: INFO : Ignition 2.19.0
Sep 13 00:42:51.452756 ignition[1100]: INFO : Stage: mount
Sep 13 00:42:51.460601 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:51.460601 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:51.460601 ignition[1100]: INFO : mount: mount passed
Sep 13 00:42:51.460601 ignition[1100]: INFO : POST message to Packet Timeline
Sep 13 00:42:51.460601 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:51.459267 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:42:52.259626 coreos-metadata[982]: Sep 13 00:42:52.259 INFO Fetch successful
Sep 13 00:42:52.271033 coreos-metadata[998]: Sep 13 00:42:52.270 INFO Fetch successful
Sep 13 00:42:52.303469 coreos-metadata[982]: Sep 13 00:42:52.303 INFO wrote hostname ci-4081.3.5-n-2af8d06a22 to /sysroot/etc/hostname
Sep 13 00:42:52.304623 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:42:52.330811 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Sep 13 00:42:52.330858 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Sep 13 00:42:52.409334 ignition[1100]: INFO : GET result: OK
Sep 13 00:42:53.045920 ignition[1100]: INFO : Ignition finished successfully
Sep 13 00:42:53.048916 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:42:53.080715 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:42:53.091784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:42:53.146466 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1127)
Sep 13 00:42:53.175411 kernel: BTRFS info (device sdb6): first mount of filesystem e234c3ec-3f80-42e7-b5d0-d61480d74075
Sep 13 00:42:53.175427 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:42:53.192687 kernel: BTRFS info (device sdb6): using free space tree
Sep 13 00:42:53.229932 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Sep 13 00:42:53.229949 kernel: BTRFS info (device sdb6): auto enabling async discard
Sep 13 00:42:53.243058 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:42:53.267982 ignition[1144]: INFO : Ignition 2.19.0
Sep 13 00:42:53.267982 ignition[1144]: INFO : Stage: files
Sep 13 00:42:53.281745 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:53.281745 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:53.281745 ignition[1144]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:42:53.281745 ignition[1144]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:42:53.281745 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:42:53.271903 unknown[1144]: wrote ssh authorized keys file for user: core
Sep 13 00:42:53.415559 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 00:42:53.471999 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:53.488694 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:42:54.015409 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 00:42:54.404255 ignition[1144]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:42:54.404255 ignition[1144]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:42:54.434908 ignition[1144]: INFO : files: files passed
Sep 13 00:42:54.434908 ignition[1144]: INFO : POST message to Packet Timeline
Sep 13 00:42:54.434908 ignition[1144]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:55.263501 ignition[1144]: INFO : GET result: OK
Sep 13 00:42:55.687973 ignition[1144]: INFO : Ignition finished successfully
Sep 13 00:42:55.690925 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:42:55.719780 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:42:55.720193 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:42:55.749006 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:42:55.749088 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:42:55.782858 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:42:55.798983 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:42:55.830717 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:42:55.830717 initrd-setup-root-after-ignition[1184]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:42:55.844687 initrd-setup-root-after-ignition[1189]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:42:55.839575 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:42:55.938699 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:42:55.938755 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:42:55.957868 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:42:55.978706 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:42:55.998951 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:42:56.012862 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:42:56.087743 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:42:56.113879 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:42:56.131426 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:42:56.135652 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:42:56.167781 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:42:56.185801 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:42:56.185956 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:42:56.214212 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:42:56.235031 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:42:56.252689 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:42:56.270746 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:42:56.291828 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:42:56.314070 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:42:56.335050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:42:56.357118 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:42:56.379207 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:42:56.400182 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:42:56.420072 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:42:56.420509 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:42:56.446220 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:42:56.466105 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:42:56.487059 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:42:56.487521 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:42:56.509956 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:42:56.510352 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:42:56.542065 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:42:56.542547 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:42:56.562284 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:42:56.581941 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:42:56.582431 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:42:56.603090 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:42:56.622081 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:42:56.640048 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:42:56.640363 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:42:56.660082 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:42:56.660384 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:42:56.684134 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:42:56.684563 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:42:56.704156 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:42:56.805868 ignition[1211]: INFO : Ignition 2.19.0
Sep 13 00:42:56.805868 ignition[1211]: INFO : Stage: umount
Sep 13 00:42:56.805868 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:42:56.805868 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Sep 13 00:42:56.805868 ignition[1211]: INFO : umount: umount passed
Sep 13 00:42:56.805868 ignition[1211]: INFO : POST message to Packet Timeline
Sep 13 00:42:56.805868 ignition[1211]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Sep 13 00:42:56.704563 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:42:56.722112 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:42:56.722515 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:42:56.752687 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:42:56.772597 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:42:56.772725 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:42:56.805789 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:42:56.820692 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:42:56.821111 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:42:56.827808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:42:56.827950 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:42:56.878297 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:42:56.878675 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:42:56.878725 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:42:56.897679 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:42:56.897742 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:42:57.861459 ignition[1211]: INFO : GET result: OK
Sep 13 00:42:58.686853 ignition[1211]: INFO : Ignition finished successfully
Sep 13 00:42:58.690115 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:42:58.690406 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:42:58.707752 systemd[1]: Stopped target network.target - Network.
Sep 13 00:42:58.724718 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:42:58.724891 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:42:58.743855 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:42:58.744020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:42:58.763983 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:42:58.764141 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:42:58.782985 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:42:58.783150 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:42:58.801981 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:42:58.802152 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:42:58.821233 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:42:58.832612 systemd-networkd[923]: enp1s0f1np1: DHCPv6 lease lost
Sep 13 00:42:58.840723 systemd-networkd[923]: enp1s0f0np0: DHCPv6 lease lost
Sep 13 00:42:58.840956 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:42:58.859571 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:42:58.859861 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:42:58.878863 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:42:58.879241 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:42:58.900135 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:42:58.900255 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:42:58.931669 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:42:58.959614 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:42:58.959655 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:42:58.978702 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:42:58.978768 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:42:58.997849 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:42:58.998002 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:42:59.018811 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:42:59.018969 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:42:59.038101 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:42:59.060581 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:42:59.060960 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:42:59.092543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:42:59.092686 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:42:59.098983 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:42:59.099095 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:42:59.126780 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:42:59.126938 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:42:59.157673 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:42:59.157845 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:42:59.186684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:42:59.186861 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:42:59.230604 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:42:59.267603 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:42:59.487700 systemd-journald[266]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:42:59.267638 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:42:59.286587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:42:59.286630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:42:59.309410 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:42:59.309598 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:42:59.357114 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:42:59.357399 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:42:59.373582 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:42:59.407927 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:42:59.434800 systemd[1]: Switching root.
Sep 13 00:42:59.581646 systemd-journald[266]: Journal stopped
Sep 13 00:43:02.253436 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:43:02.253451 kernel: SELinux: policy capability open_perms=1
Sep 13 00:43:02.253458 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:43:02.253468 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:43:02.253474 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:43:02.253479 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:43:02.253485 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:43:02.253490 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:43:02.253496 kernel: audit: type=1403 audit(1757724179.805:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:43:02.253503 systemd[1]: Successfully loaded SELinux policy in 159.680ms.
Sep 13 00:43:02.253510 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.279ms.
Sep 13 00:43:02.253517 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:43:02.253523 systemd[1]: Detected architecture x86-64.
Sep 13 00:43:02.253529 systemd[1]: Detected first boot.
Sep 13 00:43:02.253536 systemd[1]: Hostname set to .
Sep 13 00:43:02.253545 systemd[1]: Initializing machine ID from random generator.
Sep 13 00:43:02.253552 zram_generator::config[1263]: No configuration found.
Sep 13 00:43:02.253558 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:43:02.253564 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:43:02.253570 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 00:43:02.253577 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:43:02.253583 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:43:02.253590 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:43:02.253597 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:43:02.253603 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:43:02.253610 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:43:02.253616 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:43:02.253623 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:43:02.253629 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:43:02.253637 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:43:02.253644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:43:02.253650 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:43:02.253657 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:43:02.253663 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:43:02.253670 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:43:02.253676 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Sep 13 00:43:02.253683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:43:02.253690 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 00:43:02.253697 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 00:43:02.253704 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:43:02.253712 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:43:02.253719 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:43:02.253725 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:43:02.253732 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:43:02.253739 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:43:02.253746 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:43:02.253753 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:43:02.253759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:43:02.253766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:43:02.253773 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:43:02.253781 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:43:02.253788 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:43:02.253794 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:43:02.253801 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:43:02.253808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:02.253815 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:43:02.253821 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:43:02.253830 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:43:02.253837 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:43:02.253844 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:43:02.253851 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:43:02.253858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:43:02.253865 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:43:02.253872 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:43:02.253878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:43:02.253885 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:43:02.253893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:43:02.253899 kernel: ACPI: bus type drm_connector registered
Sep 13 00:43:02.253906 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:43:02.253912 kernel: fuse: init (API version 7.39)
Sep 13 00:43:02.253918 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:43:02.253925 kernel: loop: module loaded
Sep 13 00:43:02.253931 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:43:02.253938 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:43:02.253946 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 13 00:43:02.253953 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:43:02.253959 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:43:02.253966 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:43:02.253981 systemd-journald[1366]: Collecting audit messages is disabled.
Sep 13 00:43:02.253997 systemd-journald[1366]: Journal started
Sep 13 00:43:02.254011 systemd-journald[1366]: Runtime Journal (/run/log/journal/7e9c01765013449fb225ebc3b0c62b3c) is 8.0M, max 639.9M, 631.9M free.
Sep 13 00:43:00.349339 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:43:00.369257 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6.
Sep 13 00:43:00.369925 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:43:02.293782 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:43:02.293803 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:43:02.341466 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:43:02.383501 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:43:02.416892 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:43:02.416915 systemd[1]: Stopped verity-setup.service.
Sep 13 00:43:02.479497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:02.500646 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:43:02.510052 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:43:02.520743 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:43:02.530730 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:43:02.540714 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:43:02.550701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:43:02.560692 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:43:02.570828 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:43:02.582019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:43:02.595155 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:43:02.595413 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:43:02.608336 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:43:02.608696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:43:02.621425 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:43:02.621791 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:43:02.632334 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:43:02.632694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:43:02.644330 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:43:02.644747 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:43:02.655325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:43:02.655696 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:43:02.666340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:43:02.677314 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:43:02.689298 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:43:02.701300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:43:02.735537 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:43:02.758685 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:43:02.769251 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:43:02.778649 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:43:02.778669 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:43:02.789383 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:43:02.811705 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:43:02.824713 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:43:02.834754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:43:02.835668 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:43:02.846108 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:43:02.856573 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:43:02.857282 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:43:02.861276 systemd-journald[1366]: Time spent on flushing to /var/log/journal/7e9c01765013449fb225ebc3b0c62b3c is 16.118ms for 1368 entries.
Sep 13 00:43:02.861276 systemd-journald[1366]: System Journal (/var/log/journal/7e9c01765013449fb225ebc3b0c62b3c) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:43:02.909652 systemd-journald[1366]: Received client request to flush runtime journal.
Sep 13 00:43:02.874605 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:43:02.875287 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:43:02.893360 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:43:02.913038 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:43:02.929205 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:43:02.936501 kernel: loop0: detected capacity change from 0 to 140768
Sep 13 00:43:02.937542 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:43:02.958673 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:43:02.976724 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:43:02.981471 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:43:02.991735 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:43:03.002684 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:43:03.013682 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:43:03.029671 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:43:03.040534 kernel: loop1: detected capacity change from 0 to 142488
Sep 13 00:43:03.053565 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:43:03.072641 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:43:03.084181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:43:03.095087 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:43:03.095509 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:43:03.106518 systemd-tmpfiles[1415]: ACLs are not supported, ignoring.
Sep 13 00:43:03.106529 systemd-tmpfiles[1415]: ACLs are not supported, ignoring.
Sep 13 00:43:03.112867 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:43:03.123519 kernel: loop2: detected capacity change from 0 to 221472
Sep 13 00:43:03.134312 udevadm[1402]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 13 00:43:03.198529 kernel: loop3: detected capacity change from 0 to 8
Sep 13 00:43:03.231131 ldconfig[1392]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:43:03.232390 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:43:03.256472 kernel: loop4: detected capacity change from 0 to 140768
Sep 13 00:43:03.306373 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:43:03.316467 kernel: loop5: detected capacity change from 0 to 142488
Sep 13 00:43:03.351508 kernel: loop6: detected capacity change from 0 to 221472
Sep 13 00:43:03.351578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:43:03.363784 systemd-udevd[1424]: Using default interface naming scheme 'v255'.
Sep 13 00:43:03.381236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:43:03.382466 kernel: loop7: detected capacity change from 0 to 8
Sep 13 00:43:03.382763 (sd-merge)[1421]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Sep 13 00:43:03.383053 (sd-merge)[1421]: Merged extensions into '/usr'.
Sep 13 00:43:03.401778 systemd[1]: Reloading requested from client PID 1397 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:43:03.401786 systemd[1]: Reloading...
Sep 13 00:43:03.424071 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 36 scanned by (udev-worker) (1441)
Sep 13 00:43:03.424107 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Sep 13 00:43:03.448773 kernel: ACPI: button: Sleep Button [SLPB]
Sep 13 00:43:03.466767 zram_generator::config[1534]: No configuration found.
Sep 13 00:43:03.466833 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 13 00:43:03.499486 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:43:03.505505 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:43:03.550479 kernel: IPMI message handler: version 39.2
Sep 13 00:43:03.582472 kernel: ipmi device interface
Sep 13 00:43:03.596675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:43:03.606469 kernel: ipmi_si: IPMI System Interface driver
Sep 13 00:43:03.606494 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Sep 13 00:43:03.606609 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Sep 13 00:43:03.606692 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Sep 13 00:43:03.606772 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Sep 13 00:43:03.607466 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Sep 13 00:43:03.618469 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Sep 13 00:43:03.662976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Sep 13 00:43:03.724465 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Sep 13 00:43:03.724491 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Sep 13 00:43:03.748655 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Sep 13 00:43:03.748751 systemd[1]: Reloading finished in 346 ms.
Sep 13 00:43:03.759468 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Sep 13 00:43:03.802912 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Sep 13 00:43:03.803012 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Sep 13 00:43:03.819086 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Sep 13 00:43:03.839169 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Sep 13 00:43:03.865466 kernel: iTCO_vendor_support: vendor-support=0
Sep 13 00:43:03.865494 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Sep 13 00:43:03.942471 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20)
Sep 13 00:43:03.969975 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400)
Sep 13 00:43:03.970385 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
Sep 13 00:43:03.994371 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:43:04.006514 kernel: intel_rapl_common: Found RAPL domain package
Sep 13 00:43:04.006550 kernel: intel_rapl_common: Found RAPL domain core
Sep 13 00:43:04.022129 kernel: intel_rapl_common: Found RAPL domain dram
Sep 13 00:43:04.066489 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Sep 13 00:43:04.067573 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:43:04.080051 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:43:04.084470 kernel: ipmi_ssif: IPMI SSIF Interface driver
Sep 13 00:43:04.095351 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:43:04.105058 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:43:04.105605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:43:04.105852 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:43:04.108221 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:43:04.145909 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:43:04.153970 lvm[1610]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:43:04.156128 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:43:04.156935 systemd[1]: Reloading requested from client PID 1603 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:43:04.156951 systemd[1]: Reloading...
Sep 13 00:43:04.157125 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:43:04.157501 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:43:04.158213 systemd-tmpfiles[1607]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:43:04.158478 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Sep 13 00:43:04.158541 systemd-tmpfiles[1607]: ACLs are not supported, ignoring.
Sep 13 00:43:04.160413 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:43:04.160417 systemd-tmpfiles[1607]: Skipping /boot
Sep 13 00:43:04.166342 systemd-tmpfiles[1607]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:43:04.166346 systemd-tmpfiles[1607]: Skipping /boot
Sep 13 00:43:04.195475 zram_generator::config[1647]: No configuration found.
Sep 13 00:43:04.221413 systemd-networkd[1605]: lo: Link UP
Sep 13 00:43:04.221417 systemd-networkd[1605]: lo: Gained carrier
Sep 13 00:43:04.223903 systemd-networkd[1605]: bond0: netdev ready
Sep 13 00:43:04.224879 systemd-networkd[1605]: Enumeration completed
Sep 13 00:43:04.228341 systemd-networkd[1605]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:42:d4.network.
Sep 13 00:43:04.253673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:43:04.312603 systemd[1]: Reloading finished in 155 ms.
Sep 13 00:43:04.329272 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:43:04.338586 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:43:04.354680 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:43:04.365698 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:43:04.376667 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:43:04.390968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:43:04.411723 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:43:04.422546 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:43:04.433740 augenrules[1729]: No rules
Sep 13 00:43:04.434255 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:43:04.446330 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:43:04.448282 lvm[1734]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:43:04.458393 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:43:04.470724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:43:04.481269 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:43:04.493302 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:43:04.502772 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:43:04.513762 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:43:04.524772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:43:04.538749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:04.538894 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:43:04.549264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:43:04.559217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:43:04.571203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:43:04.578918 systemd-resolved[1737]: Positive Trust Anchors:
Sep 13 00:43:04.578923 systemd-resolved[1737]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:43:04.578946 systemd-resolved[1737]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:43:04.581616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:43:04.582187 systemd-resolved[1737]: Using system hostname 'ci-4081.3.5-n-2af8d06a22'.
Sep 13 00:43:04.582487 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:43:04.592690 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:43:04.592789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:04.593863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:43:04.593947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:43:04.610887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:43:04.610964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:43:04.613466 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Sep 13 00:43:04.632855 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:43:04.632931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:43:04.637414 systemd-networkd[1605]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:42:d5.network.
Sep 13 00:43:04.637471 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link
Sep 13 00:43:04.646836 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:43:04.658519 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:43:04.672878 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:04.673016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:43:04.686628 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:43:04.698490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:43:04.710855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:43:04.720741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:43:04.720975 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:43:04.721165 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:04.723171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:43:04.723521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:43:04.736064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:43:04.736460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:43:04.751008 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:43:04.751075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:43:04.762836 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:04.762967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:43:04.778875 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:43:04.789572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:43:04.804471 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Sep 13 00:43:04.822503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:43:04.826253 systemd-networkd[1605]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Sep 13 00:43:04.826473 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link
Sep 13 00:43:04.827775 systemd-networkd[1605]: enp1s0f0np0: Link UP
Sep 13 00:43:04.828039 systemd-networkd[1605]: enp1s0f0np0: Gained carrier
Sep 13 00:43:04.848509 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Sep 13 00:43:04.848504 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:43:04.851438 systemd-networkd[1605]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:42:d4.network.
Sep 13 00:43:04.851694 systemd-networkd[1605]: enp1s0f1np1: Link UP
Sep 13 00:43:04.851945 systemd-networkd[1605]: enp1s0f1np1: Gained carrier
Sep 13 00:43:04.859692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:43:04.859829 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:43:04.859919 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:43:04.860743 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:43:04.865660 systemd-networkd[1605]: bond0: Link UP
Sep 13 00:43:04.865934 systemd-networkd[1605]: bond0: Gained carrier
Sep 13 00:43:04.872046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:43:04.872169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:43:04.884788 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:43:04.884873 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:43:04.895759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:43:04.895845 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:43:04.906749 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:43:04.906819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:43:04.917445 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:43:04.926997 systemd[1]: Reached target network.target - Network.
Sep 13 00:43:04.942781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:43:04.953496 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex
Sep 13 00:43:04.953524 kernel: bond0: active interface up!
Sep 13 00:43:04.977555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:43:04.977585 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:43:04.988584 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:43:05.023087 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:43:05.034613 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:43:05.044592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:43:05.055556 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:43:05.074549 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:43:05.081496 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex
Sep 13 00:43:05.093537 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:43:05.093551 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:43:05.101554 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:43:05.111609 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:43:05.121580 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:43:05.132538 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:43:05.142152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:43:05.153226 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:43:05.163110 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:43:05.173829 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:43:05.183601 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:43:05.194542 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:43:05.203561 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:43:05.203576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:43:05.210556 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:43:05.222239 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:43:05.233075 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:43:05.243297 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:43:05.246616 coreos-metadata[1773]: Sep 13 00:43:05.246 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Sep 13 00:43:05.253263 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:43:05.256156 dbus-daemon[1774]: [system] SELinux support is enabled
Sep 13 00:43:05.256908 jq[1777]: false
Sep 13 00:43:05.262584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:43:05.263198 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:43:05.270552 extend-filesystems[1779]: Found loop4 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found loop5 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found loop6 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found loop7 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sda Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb1 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb2 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb3 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found usr Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb4 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb6 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb7 Sep 13 00:43:05.272699 extend-filesystems[1779]: Found sdb9 Sep 13 00:43:05.272699 extend-filesystems[1779]: Checking size of /dev/sdb9 Sep 13 00:43:05.456613 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 13 00:43:05.456644 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 36 scanned by (udev-worker) (1479) Sep 13 00:43:05.456665 extend-filesystems[1779]: Resized partition /dev/sdb9 Sep 13 00:43:05.273292 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:43:05.479665 extend-filesystems[1787]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:43:05.317904 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:43:05.358573 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:43:05.363144 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:43:05.400555 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... 
Sep 13 00:43:05.497905 sshd_keygen[1802]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:43:05.403861 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:43:05.498030 update_engine[1804]: I20250913 00:43:05.446897 1804 main.cc:92] Flatcar Update Engine starting Sep 13 00:43:05.498030 update_engine[1804]: I20250913 00:43:05.447559 1804 update_check_scheduler.cc:74] Next update check in 6m42s Sep 13 00:43:05.404250 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:43:05.507685 jq[1805]: true Sep 13 00:43:05.421513 systemd-logind[1799]: Watching system buttons on /dev/input/event3 (Power Button) Sep 13 00:43:05.421523 systemd-logind[1799]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 13 00:43:05.421533 systemd-logind[1799]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 13 00:43:05.421699 systemd-logind[1799]: New seat seat0. Sep 13 00:43:05.439532 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:43:05.441777 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:43:05.472918 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:43:05.507793 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:43:05.507895 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:43:05.508086 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:43:05.508177 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:43:05.518024 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:43:05.518115 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 13 00:43:05.529707 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:43:05.542518 (ntainerd)[1816]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:43:05.544800 jq[1815]: true Sep 13 00:43:05.545760 dbus-daemon[1774]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 00:43:05.546809 tar[1814]: linux-amd64/helm Sep 13 00:43:05.550742 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 13 00:43:05.550842 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Sep 13 00:43:05.555047 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:43:05.576644 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:43:05.585549 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:43:05.585666 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:43:05.596593 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:43:05.596672 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:43:05.600211 bash[1845]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:43:05.627629 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:43:05.639785 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:43:05.644398 locksmithd[1852]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:43:05.650803 systemd[1]: issuegen.service: Deactivated successfully. 
Sep 13 00:43:05.650895 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:43:05.672669 systemd[1]: Starting sshkeys.service... Sep 13 00:43:05.680187 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:43:05.693420 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:43:05.705302 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 00:43:05.716861 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:43:05.725959 containerd[1816]: time="2025-09-13T00:43:05.725901102Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:43:05.727896 coreos-metadata[1866]: Sep 13 00:43:05.727 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 13 00:43:05.738643 containerd[1816]: time="2025-09-13T00:43:05.738621558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739350 containerd[1816]: time="2025-09-13T00:43:05.739330679Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739370 containerd[1816]: time="2025-09-13T00:43:05.739351090Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:43:05.739370 containerd[1816]: time="2025-09-13T00:43:05.739360978Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:43:05.739468 containerd[1816]: time="2025-09-13T00:43:05.739455481Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 13 00:43:05.739490 containerd[1816]: time="2025-09-13T00:43:05.739475173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739520 containerd[1816]: time="2025-09-13T00:43:05.739511966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739536 containerd[1816]: time="2025-09-13T00:43:05.739520652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739625 containerd[1816]: time="2025-09-13T00:43:05.739616200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739642 containerd[1816]: time="2025-09-13T00:43:05.739625702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739642 containerd[1816]: time="2025-09-13T00:43:05.739633966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739642 containerd[1816]: time="2025-09-13T00:43:05.739639743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739689 containerd[1816]: time="2025-09-13T00:43:05.739682362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739816 containerd[1816]: time="2025-09-13T00:43:05.739808526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739877 containerd[1816]: time="2025-09-13T00:43:05.739868931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:43:05.739894 containerd[1816]: time="2025-09-13T00:43:05.739878454Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:43:05.739928 containerd[1816]: time="2025-09-13T00:43:05.739921668Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:43:05.739956 containerd[1816]: time="2025-09-13T00:43:05.739950070Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:43:05.745720 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:43:05.750837 containerd[1816]: time="2025-09-13T00:43:05.750809784Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:43:05.750837 containerd[1816]: time="2025-09-13T00:43:05.750834328Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:43:05.750881 containerd[1816]: time="2025-09-13T00:43:05.750845268Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:43:05.750881 containerd[1816]: time="2025-09-13T00:43:05.750854345Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:43:05.750881 containerd[1816]: time="2025-09-13T00:43:05.750866876Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:43:05.750944 containerd[1816]: time="2025-09-13T00:43:05.750936559Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 13 00:43:05.751101 containerd[1816]: time="2025-09-13T00:43:05.751094120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:43:05.751157 containerd[1816]: time="2025-09-13T00:43:05.751149483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 00:43:05.751177 containerd[1816]: time="2025-09-13T00:43:05.751159848Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:43:05.751177 containerd[1816]: time="2025-09-13T00:43:05.751167393Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:43:05.751177 containerd[1816]: time="2025-09-13T00:43:05.751175307Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:43:05.751216 containerd[1816]: time="2025-09-13T00:43:05.751183051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:43:05.751216 containerd[1816]: time="2025-09-13T00:43:05.751190124Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:43:05.751216 containerd[1816]: time="2025-09-13T00:43:05.751198090Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:43:05.751216 containerd[1816]: time="2025-09-13T00:43:05.751206197Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:43:05.751216 containerd[1816]: time="2025-09-13T00:43:05.751213450Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751220551Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751230285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751243872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751254243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751263965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751271697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751278630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751288 containerd[1816]: time="2025-09-13T00:43:05.751287296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751294427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751301613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751309020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751318173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751324699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751331066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751339168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751348342Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751360049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751366740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751372750Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:43:05.751399 containerd[1816]: time="2025-09-13T00:43:05.751397042Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:43:05.751559 containerd[1816]: time="2025-09-13T00:43:05.751409920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:43:05.751559 containerd[1816]: time="2025-09-13T00:43:05.751416623Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:43:05.751559 containerd[1816]: time="2025-09-13T00:43:05.751423321Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:43:05.751559 containerd[1816]: time="2025-09-13T00:43:05.751428802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:43:05.751559 containerd[1816]: time="2025-09-13T00:43:05.751435797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:43:05.751559 containerd[1816]: time="2025-09-13T00:43:05.751445884Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:43:05.751559 containerd[1816]: time="2025-09-13T00:43:05.751452284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:43:05.751660 containerd[1816]: time="2025-09-13T00:43:05.751623979Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:43:05.751660 containerd[1816]: time="2025-09-13T00:43:05.751659344Z" level=info msg="Connect containerd service" Sep 13 00:43:05.751748 containerd[1816]: time="2025-09-13T00:43:05.751677349Z" level=info msg="using legacy CRI server" Sep 13 00:43:05.751748 containerd[1816]: time="2025-09-13T00:43:05.751682035Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:43:05.751748 containerd[1816]: time="2025-09-13T00:43:05.751733232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:43:05.752046 containerd[1816]: time="2025-09-13T00:43:05.752035192Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:43:05.752146 containerd[1816]: time="2025-09-13T00:43:05.752124201Z" level=info msg="Start subscribing containerd event" Sep 13 00:43:05.752168 containerd[1816]: time="2025-09-13T00:43:05.752158163Z" level=info msg="Start recovering state" Sep 13 00:43:05.752207 containerd[1816]: time="2025-09-13T00:43:05.752200542Z" level=info msg="Start event monitor" Sep 13 00:43:05.752224 containerd[1816]: time="2025-09-13T00:43:05.752204600Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:43:05.752241 containerd[1816]: time="2025-09-13T00:43:05.752209401Z" level=info msg="Start snapshots syncer" Sep 13 00:43:05.752241 containerd[1816]: time="2025-09-13T00:43:05.752235389Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:43:05.752269 containerd[1816]: time="2025-09-13T00:43:05.752242972Z" level=info msg="Start streaming server" Sep 13 00:43:05.752283 containerd[1816]: time="2025-09-13T00:43:05.752236301Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:43:05.752315 containerd[1816]: time="2025-09-13T00:43:05.752309232Z" level=info msg="containerd successfully booted in 0.027241s" Sep 13 00:43:05.754635 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 13 00:43:05.764755 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:43:05.772984 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:43:05.810298 tar[1814]: linux-amd64/LICENSE Sep 13 00:43:05.810353 tar[1814]: linux-amd64/README.md Sep 13 00:43:05.823568 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:43:05.838466 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 13 00:43:05.863237 extend-filesystems[1787]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 13 00:43:05.863237 extend-filesystems[1787]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 13 00:43:05.863237 extend-filesystems[1787]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 13 00:43:05.896598 extend-filesystems[1779]: Resized filesystem in /dev/sdb9 Sep 13 00:43:05.863965 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:43:05.864055 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Sep 13 00:43:06.772563 systemd-networkd[1605]: bond0: Gained IPv6LL Sep 13 00:43:06.774256 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 00:43:06.786020 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 00:43:06.811681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:43:06.822187 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:43:06.840330 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:43:07.468881 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Sep 13 00:43:07.469341 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Sep 13 00:43:07.606367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:43:07.619054 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:43:07.915466 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:43:07.941705 systemd[1]: Started sshd@0-139.178.94.199:22-139.178.89.65:47112.service - OpenSSH per-connection server daemon (139.178.89.65:47112). Sep 13 00:43:07.984877 sshd[1921]: Accepted publickey for core from 139.178.89.65 port 47112 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:43:07.985659 sshd[1921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:43:07.991125 systemd-logind[1799]: New session 1 of user core. Sep 13 00:43:07.992023 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:43:08.015726 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:43:08.028933 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:43:08.054812 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 13 00:43:08.065316 (systemd)[1928]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:43:08.082419 kubelet[1910]: E0913 00:43:08.082397 1910 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:43:08.083858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:43:08.083939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:43:08.141874 systemd[1928]: Queued start job for default target default.target. Sep 13 00:43:08.159180 systemd[1928]: Created slice app.slice - User Application Slice. Sep 13 00:43:08.159200 systemd[1928]: Reached target paths.target - Paths. Sep 13 00:43:08.159215 systemd[1928]: Reached target timers.target - Timers. Sep 13 00:43:08.159950 systemd[1928]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:43:08.165650 systemd[1928]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:43:08.165686 systemd[1928]: Reached target sockets.target - Sockets. Sep 13 00:43:08.165702 systemd[1928]: Reached target basic.target - Basic System. Sep 13 00:43:08.165731 systemd[1928]: Reached target default.target - Main User Target. Sep 13 00:43:08.165754 systemd[1928]: Startup finished in 96ms. Sep 13 00:43:08.165782 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:43:08.176500 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:43:08.254772 systemd[1]: Started sshd@1-139.178.94.199:22-139.178.89.65:47126.service - OpenSSH per-connection server daemon (139.178.89.65:47126). 
Sep 13 00:43:08.294830 sshd[1942]: Accepted publickey for core from 139.178.89.65 port 47126 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:43:08.295507 sshd[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:43:08.298056 systemd-logind[1799]: New session 2 of user core. Sep 13 00:43:08.306656 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:43:08.369762 sshd[1942]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:08.383322 systemd[1]: sshd@1-139.178.94.199:22-139.178.89.65:47126.service: Deactivated successfully. Sep 13 00:43:08.384151 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:43:08.384923 systemd-logind[1799]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:43:08.385626 systemd[1]: Started sshd@2-139.178.94.199:22-139.178.89.65:47130.service - OpenSSH per-connection server daemon (139.178.89.65:47130). Sep 13 00:43:08.397490 systemd-logind[1799]: Removed session 2. Sep 13 00:43:08.427703 sshd[1949]: Accepted publickey for core from 139.178.89.65 port 47130 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:43:08.428595 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:43:08.431823 systemd-logind[1799]: New session 3 of user core. Sep 13 00:43:08.445774 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:43:08.513026 sshd[1949]: pam_unix(sshd:session): session closed for user core Sep 13 00:43:08.514460 systemd[1]: sshd@2-139.178.94.199:22-139.178.89.65:47130.service: Deactivated successfully. Sep 13 00:43:08.515301 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:43:08.515980 systemd-logind[1799]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:43:08.516541 systemd-logind[1799]: Removed session 3. Sep 13 00:43:10.103318 systemd-resolved[1737]: Clock change detected. Flushing caches. 
Sep 13 00:43:10.103530 systemd-timesyncd[1768]: Contacted time server 134.215.155.177:123 (0.flatcar.pool.ntp.org). Sep 13 00:43:10.103665 systemd-timesyncd[1768]: Initial clock synchronization to Sat 2025-09-13 00:43:10.103162 UTC. Sep 13 00:43:10.374969 login[1877]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:43:10.375261 login[1884]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 00:43:10.377903 systemd-logind[1799]: New session 5 of user core. Sep 13 00:43:10.378750 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:43:10.380054 systemd-logind[1799]: New session 4 of user core. Sep 13 00:43:10.380956 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:43:10.937816 coreos-metadata[1773]: Sep 13 00:43:10.937 INFO Fetch successful Sep 13 00:43:10.975815 coreos-metadata[1866]: Sep 13 00:43:10.975 INFO Fetch successful Sep 13 00:43:10.993671 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:43:10.994852 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Sep 13 00:43:11.011916 unknown[1866]: wrote ssh authorized keys file for user: core Sep 13 00:43:11.043467 update-ssh-keys[1988]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:43:11.043748 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:43:11.044501 systemd[1]: Finished sshkeys.service. Sep 13 00:43:11.418899 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Sep 13 00:43:11.420291 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:43:11.420773 systemd[1]: Startup finished in 1.776s (kernel) + 24.773s (initrd) + 12.186s (userspace) = 38.736s. Sep 13 00:43:17.857316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 13 00:43:17.874296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:43:18.120308 systemd[1]: Started sshd@3-139.178.94.199:22-139.178.89.65:59414.service - OpenSSH per-connection server daemon (139.178.89.65:59414).
Sep 13 00:43:18.133266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:43:18.136813 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:43:18.158157 sshd[1996]: Accepted publickey for core from 139.178.89.65 port 59414 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU
Sep 13 00:43:18.159154 sshd[1996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:43:18.160654 kubelet[2003]: E0913 00:43:18.160626 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:43:18.162154 systemd-logind[1799]: New session 6 of user core.
Sep 13 00:43:18.163097 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 13 00:43:18.163286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:43:18.163388 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:43:18.212260 sshd[1996]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:18.235191 systemd[1]: sshd@3-139.178.94.199:22-139.178.89.65:59414.service: Deactivated successfully.
Sep 13 00:43:18.238962 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:43:18.242547 systemd-logind[1799]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:43:18.261776 systemd[1]: Started sshd@4-139.178.94.199:22-139.178.89.65:59418.service - OpenSSH per-connection server daemon (139.178.89.65:59418).
Sep 13 00:43:18.264196 systemd-logind[1799]: Removed session 6.
Sep 13 00:43:18.317452 sshd[2026]: Accepted publickey for core from 139.178.89.65 port 59418 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU
Sep 13 00:43:18.318122 sshd[2026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:43:18.320646 systemd-logind[1799]: New session 7 of user core.
Sep 13 00:43:18.337268 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 13 00:43:18.388927 sshd[2026]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:18.411643 systemd[1]: sshd@4-139.178.94.199:22-139.178.89.65:59418.service: Deactivated successfully.
Sep 13 00:43:18.412340 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:43:18.412943 systemd-logind[1799]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:43:18.413641 systemd[1]: Started sshd@5-139.178.94.199:22-139.178.89.65:59428.service - OpenSSH per-connection server daemon (139.178.89.65:59428).
Sep 13 00:43:18.414007 systemd-logind[1799]: Removed session 7.
Sep 13 00:43:18.445138 sshd[2033]: Accepted publickey for core from 139.178.89.65 port 59428 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU
Sep 13 00:43:18.445997 sshd[2033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:43:18.449381 systemd-logind[1799]: New session 8 of user core.
Sep 13 00:43:18.463322 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:43:18.529233 sshd[2033]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:18.544947 systemd[1]: sshd@5-139.178.94.199:22-139.178.89.65:59428.service: Deactivated successfully.
Sep 13 00:43:18.548656 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:43:18.552135 systemd-logind[1799]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:43:18.571787 systemd[1]: Started sshd@6-139.178.94.199:22-139.178.89.65:59432.service - OpenSSH per-connection server daemon (139.178.89.65:59432).
Sep 13 00:43:18.574346 systemd-logind[1799]: Removed session 8.
Sep 13 00:43:18.630664 sshd[2040]: Accepted publickey for core from 139.178.89.65 port 59432 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU
Sep 13 00:43:18.632463 sshd[2040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:43:18.638512 systemd-logind[1799]: New session 9 of user core.
Sep 13 00:43:18.650414 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 13 00:43:18.720872 sudo[2043]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:43:18.721040 sudo[2043]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:43:18.731726 sudo[2043]: pam_unix(sudo:session): session closed for user root
Sep 13 00:43:18.732759 sshd[2040]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:18.749996 systemd[1]: sshd@6-139.178.94.199:22-139.178.89.65:59432.service: Deactivated successfully.
Sep 13 00:43:18.753646 systemd[1]: session-9.scope: Deactivated successfully.
Sep 13 00:43:18.757121 systemd-logind[1799]: Session 9 logged out. Waiting for processes to exit.
Sep 13 00:43:18.779946 systemd[1]: Started sshd@7-139.178.94.199:22-139.178.89.65:59442.service - OpenSSH per-connection server daemon (139.178.89.65:59442).
Sep 13 00:43:18.782830 systemd-logind[1799]: Removed session 9.
Sep 13 00:43:18.837256 sshd[2048]: Accepted publickey for core from 139.178.89.65 port 59442 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU
Sep 13 00:43:18.839127 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:43:18.845090 systemd-logind[1799]: New session 10 of user core.
Sep 13 00:43:18.855374 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:43:18.915754 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:43:18.915901 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:43:18.917937 sudo[2052]: pam_unix(sudo:session): session closed for user root
Sep 13 00:43:18.920588 sudo[2051]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:43:18.920735 sudo[2051]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:43:18.938349 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 13 00:43:18.939468 auditctl[2055]: No rules
Sep 13 00:43:18.939694 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:43:18.939817 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 13 00:43:18.941370 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:43:18.965563 augenrules[2073]: No rules
Sep 13 00:43:18.966380 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:43:18.967500 sudo[2051]: pam_unix(sudo:session): session closed for user root
Sep 13 00:43:18.969459 sshd[2048]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:18.983141 systemd[1]: sshd@7-139.178.94.199:22-139.178.89.65:59442.service: Deactivated successfully.
Sep 13 00:43:18.986419 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:43:18.989568 systemd-logind[1799]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:43:19.006945 systemd[1]: Started sshd@8-139.178.94.199:22-139.178.89.65:59454.service - OpenSSH per-connection server daemon (139.178.89.65:59454).
Sep 13 00:43:19.008945 systemd-logind[1799]: Removed session 10.
Sep 13 00:43:19.061537 sshd[2081]: Accepted publickey for core from 139.178.89.65 port 59454 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU
Sep 13 00:43:19.063248 sshd[2081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:43:19.068912 systemd-logind[1799]: New session 11 of user core.
Sep 13 00:43:19.085651 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:43:19.145649 sudo[2085]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:43:19.145803 sudo[2085]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:43:19.408372 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 00:43:19.408428 (dockerd)[2110]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 00:43:19.667256 dockerd[2110]: time="2025-09-13T00:43:19.667169549Z" level=info msg="Starting up"
Sep 13 00:43:19.736456 dockerd[2110]: time="2025-09-13T00:43:19.736410269Z" level=info msg="Loading containers: start."
Sep 13 00:43:19.836047 kernel: Initializing XFRM netlink socket
Sep 13 00:43:19.903827 systemd-networkd[1605]: docker0: Link UP
Sep 13 00:43:19.926162 dockerd[2110]: time="2025-09-13T00:43:19.926085895Z" level=info msg="Loading containers: done."
Sep 13 00:43:19.936699 dockerd[2110]: time="2025-09-13T00:43:19.936651717Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:43:19.936699 dockerd[2110]: time="2025-09-13T00:43:19.936696388Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 13 00:43:19.936784 dockerd[2110]: time="2025-09-13T00:43:19.936746551Z" level=info msg="Daemon has completed initialization"
Sep 13 00:43:19.936708 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4246100648-merged.mount: Deactivated successfully.
Sep 13 00:43:19.951642 dockerd[2110]: time="2025-09-13T00:43:19.951578006Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:43:19.951711 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:43:20.804040 containerd[1816]: time="2025-09-13T00:43:20.804019996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:43:21.366751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1073420022.mount: Deactivated successfully.
Sep 13 00:43:22.137627 containerd[1816]: time="2025-09-13T00:43:22.137572937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:22.137926 containerd[1816]: time="2025-09-13T00:43:22.137826981Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124"
Sep 13 00:43:22.138872 containerd[1816]: time="2025-09-13T00:43:22.138831174Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:22.140485 containerd[1816]: time="2025-09-13T00:43:22.140444012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:22.141157 containerd[1816]: time="2025-09-13T00:43:22.141115194Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.33707432s"
Sep 13 00:43:22.141157 containerd[1816]: time="2025-09-13T00:43:22.141135006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 13 00:43:22.141558 containerd[1816]: time="2025-09-13T00:43:22.141513736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:43:23.068467 containerd[1816]: time="2025-09-13T00:43:23.068414089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:23.068706 containerd[1816]: time="2025-09-13T00:43:23.068661945Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632"
Sep 13 00:43:23.069060 containerd[1816]: time="2025-09-13T00:43:23.069020178Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:23.070663 containerd[1816]: time="2025-09-13T00:43:23.070622304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:23.071334 containerd[1816]: time="2025-09-13T00:43:23.071291979Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 929.748502ms"
Sep 13 00:43:23.071334 containerd[1816]: time="2025-09-13T00:43:23.071308811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 13 00:43:23.071585 containerd[1816]: time="2025-09-13T00:43:23.071541358Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:43:23.940428 containerd[1816]: time="2025-09-13T00:43:23.940373495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:23.940639 containerd[1816]: time="2025-09-13T00:43:23.940611550Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698"
Sep 13 00:43:23.940973 containerd[1816]: time="2025-09-13T00:43:23.940934904Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:23.942632 containerd[1816]: time="2025-09-13T00:43:23.942591450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:23.943532 containerd[1816]: time="2025-09-13T00:43:23.943247945Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 871.690749ms"
Sep 13 00:43:23.943567 containerd[1816]: time="2025-09-13T00:43:23.943532264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 13 00:43:23.944370 containerd[1816]: time="2025-09-13T00:43:23.944356551Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:43:24.831234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904472823.mount: Deactivated successfully.
Sep 13 00:43:25.051194 containerd[1816]: time="2025-09-13T00:43:25.051167094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:25.051568 containerd[1816]: time="2025-09-13T00:43:25.051524742Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252"
Sep 13 00:43:25.052270 containerd[1816]: time="2025-09-13T00:43:25.052254126Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:25.053225 containerd[1816]: time="2025-09-13T00:43:25.053213587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:25.053675 containerd[1816]: time="2025-09-13T00:43:25.053665987Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.10929275s"
Sep 13 00:43:25.053695 containerd[1816]: time="2025-09-13T00:43:25.053678926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 13 00:43:25.053941 containerd[1816]: time="2025-09-13T00:43:25.053930128Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:43:25.557111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount604707594.mount: Deactivated successfully.
Sep 13 00:43:26.080284 containerd[1816]: time="2025-09-13T00:43:26.080228423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:26.080516 containerd[1816]: time="2025-09-13T00:43:26.080356861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 13 00:43:26.080934 containerd[1816]: time="2025-09-13T00:43:26.080914799Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:26.082731 containerd[1816]: time="2025-09-13T00:43:26.082715653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:26.084735 containerd[1816]: time="2025-09-13T00:43:26.084717457Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.03077053s"
Sep 13 00:43:26.084770 containerd[1816]: time="2025-09-13T00:43:26.084738007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 13 00:43:26.084983 containerd[1816]: time="2025-09-13T00:43:26.084957760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:43:26.550230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358536724.mount: Deactivated successfully.
Sep 13 00:43:26.551463 containerd[1816]: time="2025-09-13T00:43:26.551418429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:26.551633 containerd[1816]: time="2025-09-13T00:43:26.551578920Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 13 00:43:26.551973 containerd[1816]: time="2025-09-13T00:43:26.551936206Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:26.553211 containerd[1816]: time="2025-09-13T00:43:26.553170958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:26.553728 containerd[1816]: time="2025-09-13T00:43:26.553687437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 468.715151ms"
Sep 13 00:43:26.553728 containerd[1816]: time="2025-09-13T00:43:26.553702406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 13 00:43:26.554020 containerd[1816]: time="2025-09-13T00:43:26.553967241Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:43:27.062327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13881617.mount: Deactivated successfully.
Sep 13 00:43:28.096207 containerd[1816]: time="2025-09-13T00:43:28.096179843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:28.096416 containerd[1816]: time="2025-09-13T00:43:28.096377805Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 13 00:43:28.096936 containerd[1816]: time="2025-09-13T00:43:28.096899294Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:28.098738 containerd[1816]: time="2025-09-13T00:43:28.098697486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:43:28.099510 containerd[1816]: time="2025-09-13T00:43:28.099453244Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.545469211s"
Sep 13 00:43:28.099510 containerd[1816]: time="2025-09-13T00:43:28.099490117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 13 00:43:28.356861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 13 00:43:28.372472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:43:28.664062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:43:28.666416 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:43:28.686256 kubelet[2498]: E0913 00:43:28.686234 2498 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:43:28.687587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:43:28.687677 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:43:29.892486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:43:29.907297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:43:29.924630 systemd[1]: Reloading requested from client PID 2548 ('systemctl') (unit session-11.scope)...
Sep 13 00:43:29.924637 systemd[1]: Reloading...
Sep 13 00:43:29.972106 zram_generator::config[2587]: No configuration found.
Sep 13 00:43:30.044408 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:43:30.104557 systemd[1]: Reloading finished in 179 ms.
Sep 13 00:43:30.136974 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:43:30.137023 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:43:30.137168 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:43:30.151358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:43:30.407964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:43:30.410402 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:43:30.430015 kubelet[2651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:43:30.430015 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:43:30.430015 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:43:30.430271 kubelet[2651]: I0913 00:43:30.430021 2651 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:43:30.552844 kubelet[2651]: I0913 00:43:30.552799 2651 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:43:30.552844 kubelet[2651]: I0913 00:43:30.552812 2651 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:43:30.552947 kubelet[2651]: I0913 00:43:30.552941 2651 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:43:30.572825 kubelet[2651]: E0913 00:43:30.572783 2651 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.94.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:43:30.573421 kubelet[2651]: I0913 00:43:30.573380 2651 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:43:30.579190 kubelet[2651]: E0913 00:43:30.579141 2651 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:43:30.579190 kubelet[2651]: I0913 00:43:30.579163 2651 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:43:30.588511 kubelet[2651]: I0913 00:43:30.588480 2651 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:43:30.589130 kubelet[2651]: I0913 00:43:30.589093 2651 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:43:30.589168 kubelet[2651]: I0913 00:43:30.589153 2651 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:43:30.589285 kubelet[2651]: I0913 00:43:30.589168 2651 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-2af8d06a22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 00:43:30.589285 kubelet[2651]: I0913 00:43:30.589264 2651 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:43:30.589285 kubelet[2651]: I0913 00:43:30.589270 2651 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:43:30.589383 kubelet[2651]: I0913 00:43:30.589326 2651 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:43:30.591981 kubelet[2651]: I0913 00:43:30.591936 2651 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:43:30.591981 kubelet[2651]: I0913 00:43:30.591948 2651 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:43:30.591981 kubelet[2651]: I0913 00:43:30.591966 2651 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:43:30.591981 kubelet[2651]: I0913 00:43:30.591998 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:43:30.594379 kubelet[2651]: I0913 00:43:30.594349 2651 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:43:30.594713 kubelet[2651]: I0913 00:43:30.594642 2651 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:43:30.595571 kubelet[2651]: W0913 00:43:30.595530 2651 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:43:30.597279 kubelet[2651]: I0913 00:43:30.597270 2651 server.go:1274] "Started kubelet"
Sep 13 00:43:30.597387 kubelet[2651]: I0913 00:43:30.597329 2651 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:43:30.597426 kubelet[2651]: I0913 00:43:30.597382 2651 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:43:30.599473 kubelet[2651]: I0913 00:43:30.599456 2651 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:43:30.599473 kubelet[2651]: W0913 00:43:30.599442 2651 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused
Sep 13 00:43:30.599548 kubelet[2651]: E0913 00:43:30.599503 2651 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:43:30.599634 kubelet[2651]: W0913 00:43:30.599605 2651 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-2af8d06a22&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused
Sep 13 00:43:30.599668 kubelet[2651]: E0913 00:43:30.599648 2651 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-2af8d06a22&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:43:30.600264 kubelet[2651]: I0913 00:43:30.600225 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:43:30.600264 kubelet[2651]: I0913 00:43:30.600243 2651 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:43:30.600344 kubelet[2651]: I0913 00:43:30.600299 2651 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:43:30.600344 kubelet[2651]: E0913 00:43:30.600292 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found"
Sep 13 00:43:30.600344 kubelet[2651]: I0913 00:43:30.600312 2651 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:43:30.600344 kubelet[2651]: I0913 00:43:30.600342 2651 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:43:30.600527 kubelet[2651]: E0913 00:43:30.600497 2651 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-2af8d06a22?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="200ms"
Sep 13 00:43:30.600564 kubelet[2651]: W0913 00:43:30.600521 2651 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused
Sep 13 00:43:30.600595 kubelet[2651]: E0913 00:43:30.600558 2651 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:43:30.600616 kubelet[2651]: I0913 00:43:30.600604 2651 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:43:30.600811 kubelet[2651]: I0913 00:43:30.600798 2651 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:43:30.601980 kubelet[2651]: E0913 00:43:30.601970 2651 kubelet.go:1478] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:43:30.602828 kubelet[2651]: I0913 00:43:30.602812 2651 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:43:30.603286 kubelet[2651]: I0913 00:43:30.603275 2651 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:43:30.603757 kubelet[2651]: E0913 00:43:30.602900 2651 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.199:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-2af8d06a22.1864b0db51c0d551 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-2af8d06a22,UID:ci-4081.3.5-n-2af8d06a22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-2af8d06a22,},FirstTimestamp:2025-09-13 00:43:30.597254481 +0000 UTC m=+0.184833337,LastTimestamp:2025-09-13 00:43:30.597254481 +0000 UTC m=+0.184833337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-2af8d06a22,}" Sep 13 00:43:30.611700 kubelet[2651]: I0913 00:43:30.611683 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:43:30.612238 kubelet[2651]: I0913 00:43:30.612227 2651 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:43:30.612284 kubelet[2651]: I0913 00:43:30.612242 2651 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:43:30.612284 kubelet[2651]: I0913 00:43:30.612255 2651 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:43:30.612326 kubelet[2651]: E0913 00:43:30.612284 2651 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:43:30.612522 kubelet[2651]: W0913 00:43:30.612508 2651 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Sep 13 00:43:30.612589 kubelet[2651]: E0913 00:43:30.612546 2651 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.94.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:30.701816 kubelet[2651]: E0913 00:43:30.701565 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:30.713292 kubelet[2651]: E0913 00:43:30.713207 2651 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:43:30.745918 kubelet[2651]: I0913 00:43:30.745835 2651 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:43:30.745918 kubelet[2651]: I0913 00:43:30.745871 2651 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:43:30.745918 kubelet[2651]: I0913 00:43:30.745912 2651 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:43:30.747909 kubelet[2651]: I0913 00:43:30.747866 
2651 policy_none.go:49] "None policy: Start" Sep 13 00:43:30.748376 kubelet[2651]: I0913 00:43:30.748326 2651 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:43:30.748376 kubelet[2651]: I0913 00:43:30.748348 2651 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:43:30.750908 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 00:43:30.762724 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 00:43:30.764725 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 00:43:30.773688 kubelet[2651]: I0913 00:43:30.773640 2651 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:43:30.773765 kubelet[2651]: I0913 00:43:30.773755 2651 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:43:30.773815 kubelet[2651]: I0913 00:43:30.773764 2651 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:43:30.773889 kubelet[2651]: I0913 00:43:30.773879 2651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:43:30.774631 kubelet[2651]: E0913 00:43:30.774617 2651 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:30.801584 kubelet[2651]: E0913 00:43:30.801508 2651 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-2af8d06a22?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="400ms" Sep 13 00:43:30.880233 kubelet[2651]: I0913 00:43:30.879794 2651 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:30.881865 
kubelet[2651]: E0913 00:43:30.881749 2651 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:30.935161 systemd[1]: Created slice kubepods-burstable-podbbadb0b3bb04fa0b279aa0ffcdf4cffd.slice - libcontainer container kubepods-burstable-podbbadb0b3bb04fa0b279aa0ffcdf4cffd.slice. Sep 13 00:43:30.969833 systemd[1]: Created slice kubepods-burstable-pode5973ba2ebda7884ee4242d9e8a226c2.slice - libcontainer container kubepods-burstable-pode5973ba2ebda7884ee4242d9e8a226c2.slice. Sep 13 00:43:30.998889 systemd[1]: Created slice kubepods-burstable-poda26588a350c788c0662b47e7c1240aba.slice - libcontainer container kubepods-burstable-poda26588a350c788c0662b47e7c1240aba.slice. Sep 13 00:43:31.003468 kubelet[2651]: I0913 00:43:31.003357 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bbadb0b3bb04fa0b279aa0ffcdf4cffd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" (UID: \"bbadb0b3bb04fa0b279aa0ffcdf4cffd\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.003468 kubelet[2651]: I0913 00:43:31.003445 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.003757 kubelet[2651]: I0913 00:43:31.003504 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.003757 kubelet[2651]: I0913 00:43:31.003557 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a26588a350c788c0662b47e7c1240aba-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-2af8d06a22\" (UID: \"a26588a350c788c0662b47e7c1240aba\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.003757 kubelet[2651]: I0913 00:43:31.003602 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bbadb0b3bb04fa0b279aa0ffcdf4cffd-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" (UID: \"bbadb0b3bb04fa0b279aa0ffcdf4cffd\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.003757 kubelet[2651]: I0913 00:43:31.003644 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.003757 kubelet[2651]: I0913 00:43:31.003686 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.004202 kubelet[2651]: I0913 00:43:31.003728 
2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.004202 kubelet[2651]: I0913 00:43:31.003773 2651 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bbadb0b3bb04fa0b279aa0ffcdf4cffd-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" (UID: \"bbadb0b3bb04fa0b279aa0ffcdf4cffd\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.087341 kubelet[2651]: I0913 00:43:31.087278 2651 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.088053 kubelet[2651]: E0913 00:43:31.087964 2651 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.203186 kubelet[2651]: E0913 00:43:31.203006 2651 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-2af8d06a22?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="800ms" Sep 13 00:43:31.264477 containerd[1816]: time="2025-09-13T00:43:31.264251877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-2af8d06a22,Uid:bbadb0b3bb04fa0b279aa0ffcdf4cffd,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:31.293835 containerd[1816]: time="2025-09-13T00:43:31.293798744Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-2af8d06a22,Uid:e5973ba2ebda7884ee4242d9e8a226c2,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:31.304781 containerd[1816]: time="2025-09-13T00:43:31.304765523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-2af8d06a22,Uid:a26588a350c788c0662b47e7c1240aba,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:31.492654 kubelet[2651]: I0913 00:43:31.492557 2651 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.493531 kubelet[2651]: E0913 00:43:31.493256 2651 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.94.199:6443/api/v1/nodes\": dial tcp 139.178.94.199:6443: connect: connection refused" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:31.610605 kubelet[2651]: W0913 00:43:31.610522 2651 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-2af8d06a22&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Sep 13 00:43:31.610605 kubelet[2651]: E0913 00:43:31.610587 2651 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.94.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-2af8d06a22&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:31.757198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903290325.mount: Deactivated successfully. 
Sep 13 00:43:31.758603 containerd[1816]: time="2025-09-13T00:43:31.758583627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:43:31.758834 containerd[1816]: time="2025-09-13T00:43:31.758813651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:43:31.759733 containerd[1816]: time="2025-09-13T00:43:31.759698079Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:43:31.760273 containerd[1816]: time="2025-09-13T00:43:31.760230372Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:43:31.760477 containerd[1816]: time="2025-09-13T00:43:31.760431855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:43:31.760801 containerd[1816]: time="2025-09-13T00:43:31.760761757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:43:31.762484 containerd[1816]: time="2025-09-13T00:43:31.762444680Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:43:31.763720 containerd[1816]: time="2025-09-13T00:43:31.763673031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.875753ms" Sep 13 00:43:31.764113 containerd[1816]: time="2025-09-13T00:43:31.764058054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 470.184238ms" Sep 13 00:43:31.764503 containerd[1816]: time="2025-09-13T00:43:31.764459791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.006551ms" Sep 13 00:43:31.764823 containerd[1816]: time="2025-09-13T00:43:31.764783150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:43:31.866642 kubelet[2651]: W0913 00:43:31.866527 2651 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Sep 13 00:43:31.866642 kubelet[2651]: E0913 00:43:31.866571 2651 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.94.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Sep 
13 00:43:31.875966 containerd[1816]: time="2025-09-13T00:43:31.875885996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:31.876176 containerd[1816]: time="2025-09-13T00:43:31.876116720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:31.876176 containerd[1816]: time="2025-09-13T00:43:31.876131803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:31.876257 containerd[1816]: time="2025-09-13T00:43:31.876203949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:31.876636 containerd[1816]: time="2025-09-13T00:43:31.876435464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:31.876670 containerd[1816]: time="2025-09-13T00:43:31.876635697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:31.876670 containerd[1816]: time="2025-09-13T00:43:31.876645174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:31.876730 containerd[1816]: time="2025-09-13T00:43:31.876652080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:31.876730 containerd[1816]: time="2025-09-13T00:43:31.876687241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:31.876730 containerd[1816]: time="2025-09-13T00:43:31.876687025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:31.876730 containerd[1816]: time="2025-09-13T00:43:31.876699856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:31.876866 containerd[1816]: time="2025-09-13T00:43:31.876752934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:31.906501 systemd[1]: Started cri-containerd-419a593c8121116a1e749132aa042486d33d8e343173bd697959f9de02d1eef6.scope - libcontainer container 419a593c8121116a1e749132aa042486d33d8e343173bd697959f9de02d1eef6. Sep 13 00:43:31.909755 systemd[1]: Started cri-containerd-9c7946ea759cea87508e9961728b74c751a255711679730d45ad6641187946dc.scope - libcontainer container 9c7946ea759cea87508e9961728b74c751a255711679730d45ad6641187946dc. Sep 13 00:43:31.913148 systemd[1]: Started cri-containerd-b787431e2437ec7aec92b1ac889c1725d4782292e01a4aa98ec4ba70ff6d0b33.scope - libcontainer container b787431e2437ec7aec92b1ac889c1725d4782292e01a4aa98ec4ba70ff6d0b33. 
Sep 13 00:43:31.936469 containerd[1816]: time="2025-09-13T00:43:31.936442969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-2af8d06a22,Uid:bbadb0b3bb04fa0b279aa0ffcdf4cffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"419a593c8121116a1e749132aa042486d33d8e343173bd697959f9de02d1eef6\"" Sep 13 00:43:31.936562 containerd[1816]: time="2025-09-13T00:43:31.936449445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-2af8d06a22,Uid:a26588a350c788c0662b47e7c1240aba,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c7946ea759cea87508e9961728b74c751a255711679730d45ad6641187946dc\"" Sep 13 00:43:31.936952 containerd[1816]: time="2025-09-13T00:43:31.936934460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-2af8d06a22,Uid:e5973ba2ebda7884ee4242d9e8a226c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b787431e2437ec7aec92b1ac889c1725d4782292e01a4aa98ec4ba70ff6d0b33\"" Sep 13 00:43:31.938171 containerd[1816]: time="2025-09-13T00:43:31.938149856Z" level=info msg="CreateContainer within sandbox \"9c7946ea759cea87508e9961728b74c751a255711679730d45ad6641187946dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:43:31.938352 containerd[1816]: time="2025-09-13T00:43:31.938150383Z" level=info msg="CreateContainer within sandbox \"419a593c8121116a1e749132aa042486d33d8e343173bd697959f9de02d1eef6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:43:31.938384 containerd[1816]: time="2025-09-13T00:43:31.938152283Z" level=info msg="CreateContainer within sandbox \"b787431e2437ec7aec92b1ac889c1725d4782292e01a4aa98ec4ba70ff6d0b33\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:43:31.945465 containerd[1816]: time="2025-09-13T00:43:31.945444229Z" level=info msg="CreateContainer within sandbox 
\"9c7946ea759cea87508e9961728b74c751a255711679730d45ad6641187946dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a02da15e9ea4b8b475b849d4fb1866f6906074ab392eb188d5cdc438ad9841ff\"" Sep 13 00:43:31.945671 containerd[1816]: time="2025-09-13T00:43:31.945657506Z" level=info msg="CreateContainer within sandbox \"b787431e2437ec7aec92b1ac889c1725d4782292e01a4aa98ec4ba70ff6d0b33\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a813afe225244495aefbae0b233c0e5abfb48bfb1ee77b695c8f3c21af8866b\"" Sep 13 00:43:31.945745 containerd[1816]: time="2025-09-13T00:43:31.945733519Z" level=info msg="StartContainer for \"a02da15e9ea4b8b475b849d4fb1866f6906074ab392eb188d5cdc438ad9841ff\"" Sep 13 00:43:31.945864 containerd[1816]: time="2025-09-13T00:43:31.945851247Z" level=info msg="StartContainer for \"9a813afe225244495aefbae0b233c0e5abfb48bfb1ee77b695c8f3c21af8866b\"" Sep 13 00:43:31.946306 containerd[1816]: time="2025-09-13T00:43:31.946291355Z" level=info msg="CreateContainer within sandbox \"419a593c8121116a1e749132aa042486d33d8e343173bd697959f9de02d1eef6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d0a7724877a917d350a64de66a0f970b3b246f4cd2d7c6ba532e70587479af31\"" Sep 13 00:43:31.946480 containerd[1816]: time="2025-09-13T00:43:31.946438749Z" level=info msg="StartContainer for \"d0a7724877a917d350a64de66a0f970b3b246f4cd2d7c6ba532e70587479af31\"" Sep 13 00:43:31.970338 systemd[1]: Started cri-containerd-9a813afe225244495aefbae0b233c0e5abfb48bfb1ee77b695c8f3c21af8866b.scope - libcontainer container 9a813afe225244495aefbae0b233c0e5abfb48bfb1ee77b695c8f3c21af8866b. Sep 13 00:43:31.970926 systemd[1]: Started cri-containerd-a02da15e9ea4b8b475b849d4fb1866f6906074ab392eb188d5cdc438ad9841ff.scope - libcontainer container a02da15e9ea4b8b475b849d4fb1866f6906074ab392eb188d5cdc438ad9841ff. 
Sep 13 00:43:31.971453 systemd[1]: Started cri-containerd-d0a7724877a917d350a64de66a0f970b3b246f4cd2d7c6ba532e70587479af31.scope - libcontainer container d0a7724877a917d350a64de66a0f970b3b246f4cd2d7c6ba532e70587479af31. Sep 13 00:43:31.993497 containerd[1816]: time="2025-09-13T00:43:31.993477469Z" level=info msg="StartContainer for \"a02da15e9ea4b8b475b849d4fb1866f6906074ab392eb188d5cdc438ad9841ff\" returns successfully" Sep 13 00:43:31.995114 containerd[1816]: time="2025-09-13T00:43:31.995091961Z" level=info msg="StartContainer for \"9a813afe225244495aefbae0b233c0e5abfb48bfb1ee77b695c8f3c21af8866b\" returns successfully" Sep 13 00:43:31.995196 containerd[1816]: time="2025-09-13T00:43:31.995091968Z" level=info msg="StartContainer for \"d0a7724877a917d350a64de66a0f970b3b246f4cd2d7c6ba532e70587479af31\" returns successfully" Sep 13 00:43:32.000726 kubelet[2651]: W0913 00:43:32.000681 2651 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.94.199:6443: connect: connection refused Sep 13 00:43:32.000814 kubelet[2651]: E0913 00:43:32.000738 2651 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.94.199:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.199:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:43:32.004194 kubelet[2651]: E0913 00:43:32.004174 2651 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-2af8d06a22?timeout=10s\": dial tcp 139.178.94.199:6443: connect: connection refused" interval="1.6s" Sep 13 00:43:32.295492 kubelet[2651]: I0913 00:43:32.295404 2651 
kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:32.585019 kubelet[2651]: I0913 00:43:32.584953 2651 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:32.585019 kubelet[2651]: E0913 00:43:32.584977 2651 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.5-n-2af8d06a22\": node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:32.589840 kubelet[2651]: E0913 00:43:32.589802 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:32.690745 kubelet[2651]: E0913 00:43:32.690702 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:32.791517 kubelet[2651]: E0913 00:43:32.791407 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:32.891887 kubelet[2651]: E0913 00:43:32.891765 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:32.992494 kubelet[2651]: E0913 00:43:32.992418 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.093065 kubelet[2651]: E0913 00:43:33.092947 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.194064 kubelet[2651]: E0913 00:43:33.193785 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.295098 kubelet[2651]: E0913 00:43:33.294966 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 
00:43:33.395330 kubelet[2651]: E0913 00:43:33.395189 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.496443 kubelet[2651]: E0913 00:43:33.496250 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.597194 kubelet[2651]: E0913 00:43:33.597132 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.697932 kubelet[2651]: E0913 00:43:33.697839 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.798516 kubelet[2651]: E0913 00:43:33.798281 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:33.899165 kubelet[2651]: E0913 00:43:33.899066 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.000186 kubelet[2651]: E0913 00:43:34.000004 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.101261 kubelet[2651]: E0913 00:43:34.101047 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.201926 kubelet[2651]: E0913 00:43:34.201898 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.302617 kubelet[2651]: E0913 00:43:34.302495 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.403049 kubelet[2651]: E0913 00:43:34.402902 2651 kubelet_node_status.go:453] "Error getting the current node 
from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.503969 kubelet[2651]: E0913 00:43:34.503899 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.604523 kubelet[2651]: E0913 00:43:34.604414 2651 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:34.848831 kubelet[2651]: W0913 00:43:34.848752 2651 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:43:34.853475 systemd[1]: Reloading requested from client PID 2966 ('systemctl') (unit session-11.scope)... Sep 13 00:43:34.853482 systemd[1]: Reloading... Sep 13 00:43:34.899076 zram_generator::config[3005]: No configuration found. Sep 13 00:43:34.975250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:43:35.043721 systemd[1]: Reloading finished in 190 ms. Sep 13 00:43:35.075113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:43:35.090653 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:43:35.090761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:43:35.105358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:43:35.387980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 00:43:35.390594 (kubelet)[3069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:43:35.411513 kubelet[3069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:43:35.411513 kubelet[3069]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:43:35.411513 kubelet[3069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:43:35.411778 kubelet[3069]: I0913 00:43:35.411519 3069 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:43:35.415129 kubelet[3069]: I0913 00:43:35.415075 3069 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:43:35.415129 kubelet[3069]: I0913 00:43:35.415099 3069 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:43:35.415246 kubelet[3069]: I0913 00:43:35.415238 3069 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:43:35.416214 kubelet[3069]: I0913 00:43:35.416205 3069 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:43:35.420060 kubelet[3069]: I0913 00:43:35.420030 3069 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:43:35.422085 kubelet[3069]: E0913 00:43:35.422020 3069 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:43:35.422085 kubelet[3069]: I0913 00:43:35.422059 3069 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:43:35.428581 kubelet[3069]: I0913 00:43:35.428544 3069 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:43:35.428618 kubelet[3069]: I0913 00:43:35.428597 3069 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:43:35.428692 kubelet[3069]: I0913 00:43:35.428648 3069 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:43:35.428779 kubelet[3069]: I0913 00:43:35.428663 3069 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081.3.5-n-2af8d06a22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:43:35.428779 kubelet[3069]: I0913 00:43:35.428760 3069 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:43:35.428779 kubelet[3069]: I0913 00:43:35.428766 3069 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:43:35.428779 kubelet[3069]: I0913 00:43:35.428780 3069 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:43:35.428887 kubelet[3069]: I0913 00:43:35.428831 3069 
kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:43:35.428887 kubelet[3069]: I0913 00:43:35.428838 3069 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:43:35.428887 kubelet[3069]: I0913 00:43:35.428853 3069 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:43:35.428887 kubelet[3069]: I0913 00:43:35.428859 3069 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:43:35.429121 kubelet[3069]: I0913 00:43:35.429104 3069 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:43:35.429354 kubelet[3069]: I0913 00:43:35.429345 3069 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:43:35.429592 kubelet[3069]: I0913 00:43:35.429556 3069 server.go:1274] "Started kubelet" Sep 13 00:43:35.429637 kubelet[3069]: I0913 00:43:35.429590 3069 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:43:35.429693 kubelet[3069]: I0913 00:43:35.429591 3069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:43:35.429809 kubelet[3069]: I0913 00:43:35.429800 3069 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:43:35.431046 kubelet[3069]: I0913 00:43:35.431022 3069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:43:35.431146 kubelet[3069]: I0913 00:43:35.431048 3069 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:43:35.431146 kubelet[3069]: I0913 00:43:35.431081 3069 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:43:35.431146 kubelet[3069]: I0913 00:43:35.431119 3069 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 
00:43:35.431244 kubelet[3069]: E0913 00:43:35.431222 3069 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-2af8d06a22\" not found" Sep 13 00:43:35.431279 kubelet[3069]: I0913 00:43:35.431257 3069 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:43:35.431279 kubelet[3069]: E0913 00:43:35.431266 3069 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:43:35.431576 kubelet[3069]: I0913 00:43:35.431563 3069 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:43:35.431828 kubelet[3069]: I0913 00:43:35.431815 3069 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:43:35.431906 kubelet[3069]: I0913 00:43:35.431893 3069 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:43:35.432727 kubelet[3069]: I0913 00:43:35.432718 3069 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:43:35.436650 kubelet[3069]: I0913 00:43:35.436566 3069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:43:35.437665 kubelet[3069]: I0913 00:43:35.437536 3069 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:43:35.437870 kubelet[3069]: I0913 00:43:35.437858 3069 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:43:35.437922 kubelet[3069]: I0913 00:43:35.437885 3069 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:43:35.437955 kubelet[3069]: E0913 00:43:35.437919 3069 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:43:35.447918 kubelet[3069]: I0913 00:43:35.447880 3069 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:43:35.447918 kubelet[3069]: I0913 00:43:35.447889 3069 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:43:35.447918 kubelet[3069]: I0913 00:43:35.447899 3069 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:43:35.448036 kubelet[3069]: I0913 00:43:35.447981 3069 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:43:35.448036 kubelet[3069]: I0913 00:43:35.447988 3069 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:43:35.448036 kubelet[3069]: I0913 00:43:35.448001 3069 policy_none.go:49] "None policy: Start" Sep 13 00:43:35.448328 kubelet[3069]: I0913 00:43:35.448292 3069 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:43:35.448328 kubelet[3069]: I0913 00:43:35.448304 3069 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:43:35.448395 kubelet[3069]: I0913 00:43:35.448384 3069 state_mem.go:75] "Updated machine memory state" Sep 13 00:43:35.450391 kubelet[3069]: I0913 00:43:35.450354 3069 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:43:35.450488 kubelet[3069]: I0913 00:43:35.450448 3069 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:43:35.450488 kubelet[3069]: I0913 00:43:35.450455 3069 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:43:35.450596 kubelet[3069]: I0913 00:43:35.450544 3069 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:43:35.547078 kubelet[3069]: W0913 00:43:35.546952 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:43:35.547385 kubelet[3069]: W0913 00:43:35.547151 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:43:35.547385 kubelet[3069]: W0913 00:43:35.547240 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:43:35.547385 kubelet[3069]: E0913 00:43:35.547306 3069 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.558134 kubelet[3069]: I0913 00:43:35.558083 3069 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.577372 kubelet[3069]: I0913 00:43:35.577333 3069 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.577576 kubelet[3069]: I0913 00:43:35.577477 3069 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732443 kubelet[3069]: I0913 00:43:35.732198 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732443 kubelet[3069]: I0913 00:43:35.732301 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a26588a350c788c0662b47e7c1240aba-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-2af8d06a22\" (UID: \"a26588a350c788c0662b47e7c1240aba\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732443 kubelet[3069]: I0913 00:43:35.732367 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732443 kubelet[3069]: I0913 00:43:35.732430 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732990 kubelet[3069]: I0913 00:43:35.732554 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bbadb0b3bb04fa0b279aa0ffcdf4cffd-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" (UID: \"bbadb0b3bb04fa0b279aa0ffcdf4cffd\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732990 kubelet[3069]: I0913 00:43:35.732645 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/bbadb0b3bb04fa0b279aa0ffcdf4cffd-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" (UID: \"bbadb0b3bb04fa0b279aa0ffcdf4cffd\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732990 kubelet[3069]: I0913 00:43:35.732702 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bbadb0b3bb04fa0b279aa0ffcdf4cffd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" (UID: \"bbadb0b3bb04fa0b279aa0ffcdf4cffd\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732990 kubelet[3069]: I0913 00:43:35.732756 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:35.732990 kubelet[3069]: I0913 00:43:35.732805 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5973ba2ebda7884ee4242d9e8a226c2-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-2af8d06a22\" (UID: \"e5973ba2ebda7884ee4242d9e8a226c2\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:36.429132 kubelet[3069]: I0913 00:43:36.429087 3069 apiserver.go:52] "Watching apiserver" Sep 13 00:43:36.431759 kubelet[3069]: I0913 00:43:36.431742 3069 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:43:36.445162 kubelet[3069]: W0913 00:43:36.445118 3069 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Sep 13 00:43:36.445162 kubelet[3069]: E0913 00:43:36.445156 3069 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-n-2af8d06a22\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" Sep 13 00:43:36.457560 kubelet[3069]: I0913 00:43:36.457509 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-2af8d06a22" podStartSLOduration=2.457497183 podStartE2EDuration="2.457497183s" podCreationTimestamp="2025-09-13 00:43:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:36.453749394 +0000 UTC m=+1.061251469" watchObservedRunningTime="2025-09-13 00:43:36.457497183 +0000 UTC m=+1.064999260" Sep 13 00:43:36.457672 kubelet[3069]: I0913 00:43:36.457587 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-2af8d06a22" podStartSLOduration=1.45758368 podStartE2EDuration="1.45758368s" podCreationTimestamp="2025-09-13 00:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:36.457572947 +0000 UTC m=+1.065075018" watchObservedRunningTime="2025-09-13 00:43:36.45758368 +0000 UTC m=+1.065085747" Sep 13 00:43:36.466288 kubelet[3069]: I0913 00:43:36.466265 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-2af8d06a22" podStartSLOduration=1.46625652 podStartE2EDuration="1.46625652s" podCreationTimestamp="2025-09-13 00:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:36.462294895 +0000 UTC m=+1.069796967" watchObservedRunningTime="2025-09-13 00:43:36.46625652 +0000 UTC m=+1.073758590" Sep 13 
00:43:41.397872 kubelet[3069]: I0913 00:43:41.397779 3069 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:43:41.399049 containerd[1816]: time="2025-09-13T00:43:41.398602963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:43:41.399905 kubelet[3069]: I0913 00:43:41.399068 3069 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:43:42.088622 systemd[1]: Created slice kubepods-besteffort-podfc3708c8_66d2_4278_8782_eca2c83de93a.slice - libcontainer container kubepods-besteffort-podfc3708c8_66d2_4278_8782_eca2c83de93a.slice. Sep 13 00:43:42.183265 kubelet[3069]: I0913 00:43:42.183154 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc3708c8-66d2-4278-8782-eca2c83de93a-kube-proxy\") pod \"kube-proxy-n6crz\" (UID: \"fc3708c8-66d2-4278-8782-eca2c83de93a\") " pod="kube-system/kube-proxy-n6crz" Sep 13 00:43:42.183265 kubelet[3069]: I0913 00:43:42.183253 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54d2t\" (UniqueName: \"kubernetes.io/projected/fc3708c8-66d2-4278-8782-eca2c83de93a-kube-api-access-54d2t\") pod \"kube-proxy-n6crz\" (UID: \"fc3708c8-66d2-4278-8782-eca2c83de93a\") " pod="kube-system/kube-proxy-n6crz" Sep 13 00:43:42.183626 kubelet[3069]: I0913 00:43:42.183486 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc3708c8-66d2-4278-8782-eca2c83de93a-xtables-lock\") pod \"kube-proxy-n6crz\" (UID: \"fc3708c8-66d2-4278-8782-eca2c83de93a\") " pod="kube-system/kube-proxy-n6crz" Sep 13 00:43:42.183727 kubelet[3069]: I0913 00:43:42.183640 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3708c8-66d2-4278-8782-eca2c83de93a-lib-modules\") pod \"kube-proxy-n6crz\" (UID: \"fc3708c8-66d2-4278-8782-eca2c83de93a\") " pod="kube-system/kube-proxy-n6crz" Sep 13 00:43:42.284078 kubelet[3069]: I0913 00:43:42.283962 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tlpr\" (UniqueName: \"kubernetes.io/projected/c0d123ec-978b-4f9f-a953-81599d480592-kube-api-access-9tlpr\") pod \"tigera-operator-58fc44c59b-nzgbq\" (UID: \"c0d123ec-978b-4f9f-a953-81599d480592\") " pod="tigera-operator/tigera-operator-58fc44c59b-nzgbq" Sep 13 00:43:42.284494 kubelet[3069]: I0913 00:43:42.284411 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c0d123ec-978b-4f9f-a953-81599d480592-var-lib-calico\") pod \"tigera-operator-58fc44c59b-nzgbq\" (UID: \"c0d123ec-978b-4f9f-a953-81599d480592\") " pod="tigera-operator/tigera-operator-58fc44c59b-nzgbq" Sep 13 00:43:42.294293 systemd[1]: Created slice kubepods-besteffort-podc0d123ec_978b_4f9f_a953_81599d480592.slice - libcontainer container kubepods-besteffort-podc0d123ec_978b_4f9f_a953_81599d480592.slice. Sep 13 00:43:42.412253 containerd[1816]: time="2025-09-13T00:43:42.412190778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6crz,Uid:fc3708c8-66d2-4278-8782-eca2c83de93a,Namespace:kube-system,Attempt:0,}" Sep 13 00:43:42.599291 containerd[1816]: time="2025-09-13T00:43:42.599245593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nzgbq,Uid:c0d123ec-978b-4f9f-a953-81599d480592,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:43:42.793517 containerd[1816]: time="2025-09-13T00:43:42.793095725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:42.793517 containerd[1816]: time="2025-09-13T00:43:42.793445804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:42.793517 containerd[1816]: time="2025-09-13T00:43:42.793470248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:42.793658 containerd[1816]: time="2025-09-13T00:43:42.793552214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:42.818335 systemd[1]: Started cri-containerd-cd35ebd1e84feea0a587daa68145a2a1290f621ed7f7e8b851d21df73a79b82c.scope - libcontainer container cd35ebd1e84feea0a587daa68145a2a1290f621ed7f7e8b851d21df73a79b82c. Sep 13 00:43:42.842441 containerd[1816]: time="2025-09-13T00:43:42.842375524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n6crz,Uid:fc3708c8-66d2-4278-8782-eca2c83de93a,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd35ebd1e84feea0a587daa68145a2a1290f621ed7f7e8b851d21df73a79b82c\"" Sep 13 00:43:42.844769 containerd[1816]: time="2025-09-13T00:43:42.844738207Z" level=info msg="CreateContainer within sandbox \"cd35ebd1e84feea0a587daa68145a2a1290f621ed7f7e8b851d21df73a79b82c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:43:43.095554 containerd[1816]: time="2025-09-13T00:43:43.095448531Z" level=info msg="CreateContainer within sandbox \"cd35ebd1e84feea0a587daa68145a2a1290f621ed7f7e8b851d21df73a79b82c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f28b591c8b89276135f8246fbb33760ffc8c7a0b348c4cdddf4e1036ef63836\"" Sep 13 00:43:43.095881 containerd[1816]: time="2025-09-13T00:43:43.095826209Z" level=info msg="StartContainer for \"3f28b591c8b89276135f8246fbb33760ffc8c7a0b348c4cdddf4e1036ef63836\"" 
Sep 13 00:43:43.101204 containerd[1816]: time="2025-09-13T00:43:43.101163781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:43:43.101398 containerd[1816]: time="2025-09-13T00:43:43.101380915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:43:43.101420 containerd[1816]: time="2025-09-13T00:43:43.101397458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:43.101451 containerd[1816]: time="2025-09-13T00:43:43.101440441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:43:43.121170 systemd[1]: Started cri-containerd-3f28b591c8b89276135f8246fbb33760ffc8c7a0b348c4cdddf4e1036ef63836.scope - libcontainer container 3f28b591c8b89276135f8246fbb33760ffc8c7a0b348c4cdddf4e1036ef63836. Sep 13 00:43:43.121910 systemd[1]: Started cri-containerd-be9599126c955276464df8f1b0fb4ae1a8ebd68e167fd2ed0ecddae350bcfc17.scope - libcontainer container be9599126c955276464df8f1b0fb4ae1a8ebd68e167fd2ed0ecddae350bcfc17. 
Sep 13 00:43:43.134998 containerd[1816]: time="2025-09-13T00:43:43.134975906Z" level=info msg="StartContainer for \"3f28b591c8b89276135f8246fbb33760ffc8c7a0b348c4cdddf4e1036ef63836\" returns successfully" Sep 13 00:43:43.145244 containerd[1816]: time="2025-09-13T00:43:43.145218694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nzgbq,Uid:c0d123ec-978b-4f9f-a953-81599d480592,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"be9599126c955276464df8f1b0fb4ae1a8ebd68e167fd2ed0ecddae350bcfc17\"" Sep 13 00:43:43.145993 containerd[1816]: time="2025-09-13T00:43:43.145977435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:43:44.139886 kubelet[3069]: I0913 00:43:44.139855 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n6crz" podStartSLOduration=2.139840864 podStartE2EDuration="2.139840864s" podCreationTimestamp="2025-09-13 00:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:43:43.466000251 +0000 UTC m=+8.073502346" watchObservedRunningTime="2025-09-13 00:43:44.139840864 +0000 UTC m=+8.747342933" Sep 13 00:43:44.404495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588691122.mount: Deactivated successfully. 
Sep 13 00:43:44.949223 containerd[1816]: time="2025-09-13T00:43:44.949170740Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:44.949442 containerd[1816]: time="2025-09-13T00:43:44.949306288Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:43:44.949756 containerd[1816]: time="2025-09-13T00:43:44.949713573Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:44.950906 containerd[1816]: time="2025-09-13T00:43:44.950864786Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:44.951333 containerd[1816]: time="2025-09-13T00:43:44.951292237Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 1.805294377s" Sep 13 00:43:44.951333 containerd[1816]: time="2025-09-13T00:43:44.951309088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:43:44.952215 containerd[1816]: time="2025-09-13T00:43:44.952204920Z" level=info msg="CreateContainer within sandbox \"be9599126c955276464df8f1b0fb4ae1a8ebd68e167fd2ed0ecddae350bcfc17\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:43:44.956111 containerd[1816]: time="2025-09-13T00:43:44.956064453Z" level=info msg="CreateContainer within sandbox 
\"be9599126c955276464df8f1b0fb4ae1a8ebd68e167fd2ed0ecddae350bcfc17\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502\"" Sep 13 00:43:44.956333 containerd[1816]: time="2025-09-13T00:43:44.956281524Z" level=info msg="StartContainer for \"79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502\"" Sep 13 00:43:44.975321 systemd[1]: Started cri-containerd-79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502.scope - libcontainer container 79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502. Sep 13 00:43:44.985674 containerd[1816]: time="2025-09-13T00:43:44.985637754Z" level=info msg="StartContainer for \"79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502\" returns successfully" Sep 13 00:43:45.488819 kubelet[3069]: I0913 00:43:45.488775 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-nzgbq" podStartSLOduration=1.682848258 podStartE2EDuration="3.488746815s" podCreationTimestamp="2025-09-13 00:43:42 +0000 UTC" firstStartedPulling="2025-09-13 00:43:43.145760706 +0000 UTC m=+7.753262776" lastFinishedPulling="2025-09-13 00:43:44.951659261 +0000 UTC m=+9.559161333" observedRunningTime="2025-09-13 00:43:45.488606775 +0000 UTC m=+10.096108848" watchObservedRunningTime="2025-09-13 00:43:45.488746815 +0000 UTC m=+10.096248883" Sep 13 00:43:46.440636 systemd[1]: cri-containerd-79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502.scope: Deactivated successfully. Sep 13 00:43:46.451411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502-rootfs.mount: Deactivated successfully. 
Sep 13 00:43:46.653284 containerd[1816]: time="2025-09-13T00:43:46.653246436Z" level=info msg="shim disconnected" id=79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502 namespace=k8s.io
Sep 13 00:43:46.653284 containerd[1816]: time="2025-09-13T00:43:46.653281887Z" level=warning msg="cleaning up after shim disconnected" id=79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502 namespace=k8s.io
Sep 13 00:43:46.653530 containerd[1816]: time="2025-09-13T00:43:46.653291015Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:43:47.477355 kubelet[3069]: I0913 00:43:47.477337 3069 scope.go:117] "RemoveContainer" containerID="79e8e85d7045408fc00e68ed1557fbddccc2a359ab0015a4eb1f1e144ee5c502"
Sep 13 00:43:47.478204 containerd[1816]: time="2025-09-13T00:43:47.478184794Z" level=info msg="CreateContainer within sandbox \"be9599126c955276464df8f1b0fb4ae1a8ebd68e167fd2ed0ecddae350bcfc17\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 13 00:43:47.482177 containerd[1816]: time="2025-09-13T00:43:47.482156246Z" level=info msg="CreateContainer within sandbox \"be9599126c955276464df8f1b0fb4ae1a8ebd68e167fd2ed0ecddae350bcfc17\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"31691ff8a4de1562e6bbc34cb42539d8064e7d441b443cc71f6de3c0b42f6004\""
Sep 13 00:43:47.482419 containerd[1816]: time="2025-09-13T00:43:47.482406172Z" level=info msg="StartContainer for \"31691ff8a4de1562e6bbc34cb42539d8064e7d441b443cc71f6de3c0b42f6004\""
Sep 13 00:43:47.511144 systemd[1]: Started cri-containerd-31691ff8a4de1562e6bbc34cb42539d8064e7d441b443cc71f6de3c0b42f6004.scope - libcontainer container 31691ff8a4de1562e6bbc34cb42539d8064e7d441b443cc71f6de3c0b42f6004.
Sep 13 00:43:47.522897 containerd[1816]: time="2025-09-13T00:43:47.522872505Z" level=info msg="StartContainer for \"31691ff8a4de1562e6bbc34cb42539d8064e7d441b443cc71f6de3c0b42f6004\" returns successfully"
Sep 13 00:43:49.720826 sudo[2085]: pam_unix(sudo:session): session closed for user root
Sep 13 00:43:49.721730 sshd[2081]: pam_unix(sshd:session): session closed for user core
Sep 13 00:43:49.723187 systemd[1]: sshd@8-139.178.94.199:22-139.178.89.65:59454.service: Deactivated successfully.
Sep 13 00:43:49.724048 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:43:49.724135 systemd[1]: session-11.scope: Consumed 3.368s CPU time, 169.2M memory peak, 0B memory swap peak.
Sep 13 00:43:49.724660 systemd-logind[1799]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:43:49.725129 systemd-logind[1799]: Removed session 11.
Sep 13 00:43:50.685139 update_engine[1804]: I20250913 00:43:50.685070 1804 update_attempter.cc:509] Updating boot flags...
Sep 13 00:43:50.718025 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 36 scanned by (udev-worker) (3668)
Sep 13 00:43:50.746054 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 36 scanned by (udev-worker) (3672)
Sep 13 00:43:52.598727 systemd[1]: Created slice kubepods-besteffort-pod6d2ff0d7_b36e_4486_b7c2_605f15dfba7e.slice - libcontainer container kubepods-besteffort-pod6d2ff0d7_b36e_4486_b7c2_605f15dfba7e.slice.
Sep 13 00:43:52.757540 kubelet[3069]: I0913 00:43:52.757415 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d2ff0d7-b36e-4486-b7c2-605f15dfba7e-tigera-ca-bundle\") pod \"calico-typha-d5485db55-n2wpn\" (UID: \"6d2ff0d7-b36e-4486-b7c2-605f15dfba7e\") " pod="calico-system/calico-typha-d5485db55-n2wpn"
Sep 13 00:43:52.757540 kubelet[3069]: I0913 00:43:52.757518 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2w87\" (UniqueName: \"kubernetes.io/projected/6d2ff0d7-b36e-4486-b7c2-605f15dfba7e-kube-api-access-v2w87\") pod \"calico-typha-d5485db55-n2wpn\" (UID: \"6d2ff0d7-b36e-4486-b7c2-605f15dfba7e\") " pod="calico-system/calico-typha-d5485db55-n2wpn"
Sep 13 00:43:52.758582 kubelet[3069]: I0913 00:43:52.757665 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d2ff0d7-b36e-4486-b7c2-605f15dfba7e-typha-certs\") pod \"calico-typha-d5485db55-n2wpn\" (UID: \"6d2ff0d7-b36e-4486-b7c2-605f15dfba7e\") " pod="calico-system/calico-typha-d5485db55-n2wpn"
Sep 13 00:43:52.903443 containerd[1816]: time="2025-09-13T00:43:52.903357279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d5485db55-n2wpn,Uid:6d2ff0d7-b36e-4486-b7c2-605f15dfba7e,Namespace:calico-system,Attempt:0,}"
Sep 13 00:43:52.913450 containerd[1816]: time="2025-09-13T00:43:52.913406682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:43:52.913450 containerd[1816]: time="2025-09-13T00:43:52.913438751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:43:52.913450 containerd[1816]: time="2025-09-13T00:43:52.913446001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:52.913582 containerd[1816]: time="2025-09-13T00:43:52.913490592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:52.937168 systemd[1]: Started cri-containerd-920f82599901d6c094e2554a91676f1ca9a144599b373845bebae0e3d4a82228.scope - libcontainer container 920f82599901d6c094e2554a91676f1ca9a144599b373845bebae0e3d4a82228.
Sep 13 00:43:52.938571 systemd[1]: Created slice kubepods-besteffort-pod05f6c95a_dc13_49e9_9348_0360ae322062.slice - libcontainer container kubepods-besteffort-pod05f6c95a_dc13_49e9_9348_0360ae322062.slice.
Sep 13 00:43:52.959466 containerd[1816]: time="2025-09-13T00:43:52.959443007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d5485db55-n2wpn,Uid:6d2ff0d7-b36e-4486-b7c2-605f15dfba7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"920f82599901d6c094e2554a91676f1ca9a144599b373845bebae0e3d4a82228\""
Sep 13 00:43:52.960189 containerd[1816]: time="2025-09-13T00:43:52.960176523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 00:43:53.059429 kubelet[3069]: I0913 00:43:53.059349 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05f6c95a-dc13-49e9-9348-0360ae322062-tigera-ca-bundle\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.059683 kubelet[3069]: I0913 00:43:53.059485 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/05f6c95a-dc13-49e9-9348-0360ae322062-node-certs\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.059683 kubelet[3069]: I0913 00:43:53.059571 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-xtables-lock\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.059683 kubelet[3069]: I0913 00:43:53.059661 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-var-lib-calico\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.059963 kubelet[3069]: I0913 00:43:53.059752 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-cni-bin-dir\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.059963 kubelet[3069]: I0913 00:43:53.059841 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-cni-log-dir\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.059963 kubelet[3069]: I0913 00:43:53.059924 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-flexvol-driver-host\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.060256 kubelet[3069]: I0913 00:43:53.060038 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-policysync\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.060256 kubelet[3069]: I0913 00:43:53.060141 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-cni-net-dir\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.060256 kubelet[3069]: I0913 00:43:53.060219 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-lib-modules\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.060514 kubelet[3069]: I0913 00:43:53.060298 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/05f6c95a-dc13-49e9-9348-0360ae322062-var-run-calico\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.060514 kubelet[3069]: I0913 00:43:53.060405 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhfgx\" (UniqueName: \"kubernetes.io/projected/05f6c95a-dc13-49e9-9348-0360ae322062-kube-api-access-qhfgx\") pod \"calico-node-4tmh9\" (UID: \"05f6c95a-dc13-49e9-9348-0360ae322062\") " pod="calico-system/calico-node-4tmh9"
Sep 13 00:43:53.164440 kubelet[3069]: E0913 00:43:53.164252 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.164440 kubelet[3069]: W0913 00:43:53.164307 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.164440 kubelet[3069]: E0913 00:43:53.164366 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.168650 kubelet[3069]: E0913 00:43:53.168605 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.168650 kubelet[3069]: W0913 00:43:53.168642 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.169094 kubelet[3069]: E0913 00:43:53.168684 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.181421 kubelet[3069]: E0913 00:43:53.181369 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.181421 kubelet[3069]: W0913 00:43:53.181410 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.181699 kubelet[3069]: E0913 00:43:53.181450 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.239694 kubelet[3069]: E0913 00:43:53.239584 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v25t" podUID="c3e12566-608c-47e4-9de0-c2f38136e9e0"
Sep 13 00:43:53.241182 containerd[1816]: time="2025-09-13T00:43:53.241104613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tmh9,Uid:05f6c95a-dc13-49e9-9348-0360ae322062,Namespace:calico-system,Attempt:0,}"
Sep 13 00:43:53.251775 containerd[1816]: time="2025-09-13T00:43:53.251728392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:43:53.251856 containerd[1816]: time="2025-09-13T00:43:53.251798624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:43:53.252027 containerd[1816]: time="2025-09-13T00:43:53.252012747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:53.252075 containerd[1816]: time="2025-09-13T00:43:53.252065186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:43:53.260848 kubelet[3069]: E0913 00:43:53.260832 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.260848 kubelet[3069]: W0913 00:43:53.260845 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.260949 kubelet[3069]: E0913 00:43:53.260857 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.260978 kubelet[3069]: E0913 00:43:53.260973 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.260999 kubelet[3069]: W0913 00:43:53.260978 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.260999 kubelet[3069]: E0913 00:43:53.260983 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261139 kubelet[3069]: E0913 00:43:53.261132 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261160 kubelet[3069]: W0913 00:43:53.261139 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261160 kubelet[3069]: E0913 00:43:53.261146 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261255 kubelet[3069]: E0913 00:43:53.261249 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261255 kubelet[3069]: W0913 00:43:53.261254 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261304 kubelet[3069]: E0913 00:43:53.261259 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261355 kubelet[3069]: E0913 00:43:53.261350 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261375 kubelet[3069]: W0913 00:43:53.261355 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261375 kubelet[3069]: E0913 00:43:53.261360 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261439 kubelet[3069]: E0913 00:43:53.261430 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261439 kubelet[3069]: W0913 00:43:53.261435 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261439 kubelet[3069]: E0913 00:43:53.261440 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261553 kubelet[3069]: E0913 00:43:53.261519 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261553 kubelet[3069]: W0913 00:43:53.261523 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261553 kubelet[3069]: E0913 00:43:53.261528 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261645 kubelet[3069]: E0913 00:43:53.261596 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261645 kubelet[3069]: W0913 00:43:53.261601 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261645 kubelet[3069]: E0913 00:43:53.261605 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261730 kubelet[3069]: E0913 00:43:53.261675 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261730 kubelet[3069]: W0913 00:43:53.261680 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261730 kubelet[3069]: E0913 00:43:53.261685 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261817 kubelet[3069]: E0913 00:43:53.261748 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261817 kubelet[3069]: W0913 00:43:53.261752 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261817 kubelet[3069]: E0913 00:43:53.261756 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261871 kubelet[3069]: E0913 00:43:53.261826 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261871 kubelet[3069]: W0913 00:43:53.261831 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261871 kubelet[3069]: E0913 00:43:53.261836 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261915 kubelet[3069]: E0913 00:43:53.261901 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261915 kubelet[3069]: W0913 00:43:53.261906 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.261915 kubelet[3069]: E0913 00:43:53.261910 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.261998 kubelet[3069]: E0913 00:43:53.261992 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.261998 kubelet[3069]: W0913 00:43:53.261997 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262046 kubelet[3069]: E0913 00:43:53.262001 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262087 kubelet[3069]: E0913 00:43:53.262082 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262104 kubelet[3069]: W0913 00:43:53.262086 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262104 kubelet[3069]: E0913 00:43:53.262091 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262159 kubelet[3069]: E0913 00:43:53.262154 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262159 kubelet[3069]: W0913 00:43:53.262159 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262194 kubelet[3069]: E0913 00:43:53.262163 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262234 kubelet[3069]: E0913 00:43:53.262229 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262252 kubelet[3069]: W0913 00:43:53.262235 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262252 kubelet[3069]: E0913 00:43:53.262239 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262319 kubelet[3069]: E0913 00:43:53.262314 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262319 kubelet[3069]: W0913 00:43:53.262319 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262353 kubelet[3069]: E0913 00:43:53.262323 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262432 kubelet[3069]: E0913 00:43:53.262427 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262450 kubelet[3069]: W0913 00:43:53.262432 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262450 kubelet[3069]: E0913 00:43:53.262436 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262504 kubelet[3069]: E0913 00:43:53.262500 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262522 kubelet[3069]: W0913 00:43:53.262504 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262522 kubelet[3069]: E0913 00:43:53.262509 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262578 kubelet[3069]: E0913 00:43:53.262573 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262597 kubelet[3069]: W0913 00:43:53.262578 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262597 kubelet[3069]: E0913 00:43:53.262583 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262707 kubelet[3069]: E0913 00:43:53.262702 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262707 kubelet[3069]: W0913 00:43:53.262706 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262747 kubelet[3069]: E0913 00:43:53.262711 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262747 kubelet[3069]: I0913 00:43:53.262724 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3e12566-608c-47e4-9de0-c2f38136e9e0-varrun\") pod \"csi-node-driver-9v25t\" (UID: \"c3e12566-608c-47e4-9de0-c2f38136e9e0\") " pod="calico-system/csi-node-driver-9v25t"
Sep 13 00:43:53.262812 kubelet[3069]: E0913 00:43:53.262806 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262812 kubelet[3069]: W0913 00:43:53.262812 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262848 kubelet[3069]: E0913 00:43:53.262818 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262848 kubelet[3069]: I0913 00:43:53.262828 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8cq\" (UniqueName: \"kubernetes.io/projected/c3e12566-608c-47e4-9de0-c2f38136e9e0-kube-api-access-2g8cq\") pod \"csi-node-driver-9v25t\" (UID: \"c3e12566-608c-47e4-9de0-c2f38136e9e0\") " pod="calico-system/csi-node-driver-9v25t"
Sep 13 00:43:53.262909 kubelet[3069]: E0913 00:43:53.262904 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.262930 kubelet[3069]: W0913 00:43:53.262909 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.262930 kubelet[3069]: E0913 00:43:53.262915 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.262930 kubelet[3069]: I0913 00:43:53.262923 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3e12566-608c-47e4-9de0-c2f38136e9e0-kubelet-dir\") pod \"csi-node-driver-9v25t\" (UID: \"c3e12566-608c-47e4-9de0-c2f38136e9e0\") " pod="calico-system/csi-node-driver-9v25t"
Sep 13 00:43:53.263022 kubelet[3069]: E0913 00:43:53.263008 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.263041 kubelet[3069]: W0913 00:43:53.263022 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.263041 kubelet[3069]: E0913 00:43:53.263030 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.263107 kubelet[3069]: E0913 00:43:53.263102 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.263128 kubelet[3069]: W0913 00:43:53.263107 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.263128 kubelet[3069]: E0913 00:43:53.263113 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.263199 kubelet[3069]: E0913 00:43:53.263194 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.263219 kubelet[3069]: W0913 00:43:53.263199 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.263219 kubelet[3069]: E0913 00:43:53.263205 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.263288 kubelet[3069]: E0913 00:43:53.263283 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.263288 kubelet[3069]: W0913 00:43:53.263288 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.263328 kubelet[3069]: E0913 00:43:53.263293 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.263370 kubelet[3069]: E0913 00:43:53.263365 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.263391 kubelet[3069]: W0913 00:43:53.263381 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.263391 kubelet[3069]: E0913 00:43:53.263387 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:43:53.263429 kubelet[3069]: I0913 00:43:53.263397 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3e12566-608c-47e4-9de0-c2f38136e9e0-registration-dir\") pod \"csi-node-driver-9v25t\" (UID: \"c3e12566-608c-47e4-9de0-c2f38136e9e0\") " pod="calico-system/csi-node-driver-9v25t"
Sep 13 00:43:53.263479 kubelet[3069]: E0913 00:43:53.263473 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:43:53.263500 kubelet[3069]: W0913 00:43:53.263479 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:43:53.263500 kubelet[3069]: E0913 00:43:53.263484 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 13 00:43:53.263500 kubelet[3069]: I0913 00:43:53.263492 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3e12566-608c-47e4-9de0-c2f38136e9e0-socket-dir\") pod \"csi-node-driver-9v25t\" (UID: \"c3e12566-608c-47e4-9de0-c2f38136e9e0\") " pod="calico-system/csi-node-driver-9v25t" Sep 13 00:43:53.263564 kubelet[3069]: E0913 00:43:53.263558 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.263586 kubelet[3069]: W0913 00:43:53.263565 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.263586 kubelet[3069]: E0913 00:43:53.263573 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.263651 kubelet[3069]: E0913 00:43:53.263646 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.263669 kubelet[3069]: W0913 00:43:53.263651 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.263669 kubelet[3069]: E0913 00:43:53.263657 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.263740 kubelet[3069]: E0913 00:43:53.263736 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.263763 kubelet[3069]: W0913 00:43:53.263742 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.263763 kubelet[3069]: E0913 00:43:53.263751 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.263842 kubelet[3069]: E0913 00:43:53.263836 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.263861 kubelet[3069]: W0913 00:43:53.263843 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.263861 kubelet[3069]: E0913 00:43:53.263850 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.263935 kubelet[3069]: E0913 00:43:53.263930 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.263957 kubelet[3069]: W0913 00:43:53.263935 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.263957 kubelet[3069]: E0913 00:43:53.263940 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.264018 kubelet[3069]: E0913 00:43:53.264013 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.264018 kubelet[3069]: W0913 00:43:53.264018 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.264056 kubelet[3069]: E0913 00:43:53.264023 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.273219 systemd[1]: Started cri-containerd-f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60.scope - libcontainer container f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60. 
Sep 13 00:43:53.283020 containerd[1816]: time="2025-09-13T00:43:53.282990596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tmh9,Uid:05f6c95a-dc13-49e9-9348-0360ae322062,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60\"" Sep 13 00:43:53.364468 kubelet[3069]: E0913 00:43:53.364413 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.364468 kubelet[3069]: W0913 00:43:53.364458 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.364934 kubelet[3069]: E0913 00:43:53.364515 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.365160 kubelet[3069]: E0913 00:43:53.365121 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.365358 kubelet[3069]: W0913 00:43:53.365163 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.365358 kubelet[3069]: E0913 00:43:53.365218 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.365921 kubelet[3069]: E0913 00:43:53.365869 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.366177 kubelet[3069]: W0913 00:43:53.365923 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.366177 kubelet[3069]: E0913 00:43:53.365987 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.366575 kubelet[3069]: E0913 00:43:53.366541 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.366707 kubelet[3069]: W0913 00:43:53.366580 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.366707 kubelet[3069]: E0913 00:43:53.366631 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.367280 kubelet[3069]: E0913 00:43:53.367222 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.367280 kubelet[3069]: W0913 00:43:53.367263 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.367628 kubelet[3069]: E0913 00:43:53.367352 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.367845 kubelet[3069]: E0913 00:43:53.367787 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.367845 kubelet[3069]: W0913 00:43:53.367829 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.368151 kubelet[3069]: E0913 00:43:53.367923 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.368463 kubelet[3069]: E0913 00:43:53.368408 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.368463 kubelet[3069]: W0913 00:43:53.368449 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.368716 kubelet[3069]: E0913 00:43:53.368545 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.369164 kubelet[3069]: E0913 00:43:53.369104 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.369164 kubelet[3069]: W0913 00:43:53.369140 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.369451 kubelet[3069]: E0913 00:43:53.369182 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.369859 kubelet[3069]: E0913 00:43:53.369819 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.369972 kubelet[3069]: W0913 00:43:53.369865 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.369972 kubelet[3069]: E0913 00:43:53.369933 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.370651 kubelet[3069]: E0913 00:43:53.370598 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.370651 kubelet[3069]: W0913 00:43:53.370646 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.370926 kubelet[3069]: E0913 00:43:53.370706 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.371302 kubelet[3069]: E0913 00:43:53.371269 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.371448 kubelet[3069]: W0913 00:43:53.371304 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.371448 kubelet[3069]: E0913 00:43:53.371345 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.371779 kubelet[3069]: E0913 00:43:53.371752 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.371944 kubelet[3069]: W0913 00:43:53.371785 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.371944 kubelet[3069]: E0913 00:43:53.371850 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.372288 kubelet[3069]: E0913 00:43:53.372243 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.372288 kubelet[3069]: W0913 00:43:53.372267 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.372582 kubelet[3069]: E0913 00:43:53.372327 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.372798 kubelet[3069]: E0913 00:43:53.372689 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.372798 kubelet[3069]: W0913 00:43:53.372721 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.372798 kubelet[3069]: E0913 00:43:53.372792 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.373316 kubelet[3069]: E0913 00:43:53.373187 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.373316 kubelet[3069]: W0913 00:43:53.373213 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.373316 kubelet[3069]: E0913 00:43:53.373293 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.373804 kubelet[3069]: E0913 00:43:53.373663 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.373804 kubelet[3069]: W0913 00:43:53.373688 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.373804 kubelet[3069]: E0913 00:43:53.373761 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.374307 kubelet[3069]: E0913 00:43:53.374206 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.374307 kubelet[3069]: W0913 00:43:53.374230 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.374649 kubelet[3069]: E0913 00:43:53.374338 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.374838 kubelet[3069]: E0913 00:43:53.374711 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.374838 kubelet[3069]: W0913 00:43:53.374735 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.375178 kubelet[3069]: E0913 00:43:53.374839 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.375357 kubelet[3069]: E0913 00:43:53.375192 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.375357 kubelet[3069]: W0913 00:43:53.375218 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.375357 kubelet[3069]: E0913 00:43:53.375327 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.375871 kubelet[3069]: E0913 00:43:53.375685 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.375871 kubelet[3069]: W0913 00:43:53.375709 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.375871 kubelet[3069]: E0913 00:43:53.375740 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.376392 kubelet[3069]: E0913 00:43:53.376180 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.376392 kubelet[3069]: W0913 00:43:53.376204 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.376392 kubelet[3069]: E0913 00:43:53.376236 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.376822 kubelet[3069]: E0913 00:43:53.376765 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.376822 kubelet[3069]: W0913 00:43:53.376790 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.377110 kubelet[3069]: E0913 00:43:53.376888 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.377210 kubelet[3069]: E0913 00:43:53.377180 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.377210 kubelet[3069]: W0913 00:43:53.377202 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.377407 kubelet[3069]: E0913 00:43:53.377307 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.377716 kubelet[3069]: E0913 00:43:53.377670 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.377716 kubelet[3069]: W0913 00:43:53.377707 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.378079 kubelet[3069]: E0913 00:43:53.377753 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:53.378819 kubelet[3069]: E0913 00:43:53.378763 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.378819 kubelet[3069]: W0913 00:43:53.378817 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.379119 kubelet[3069]: E0913 00:43:53.378869 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:53.390296 kubelet[3069]: E0913 00:43:53.390283 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:53.390296 kubelet[3069]: W0913 00:43:53.390293 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:53.390383 kubelet[3069]: E0913 00:43:53.390305 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:54.403657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount966777223.mount: Deactivated successfully. 
Sep 13 00:43:54.438632 kubelet[3069]: E0913 00:43:54.438610 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v25t" podUID="c3e12566-608c-47e4-9de0-c2f38136e9e0" Sep 13 00:43:54.752520 containerd[1816]: time="2025-09-13T00:43:54.752422086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:54.752763 containerd[1816]: time="2025-09-13T00:43:54.752652420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 13 00:43:54.753015 containerd[1816]: time="2025-09-13T00:43:54.752999464Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:54.754504 containerd[1816]: time="2025-09-13T00:43:54.754485781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:54.755127 containerd[1816]: time="2025-09-13T00:43:54.755112878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.794919974s" Sep 13 00:43:54.755165 containerd[1816]: time="2025-09-13T00:43:54.755130819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 13 00:43:54.755586 containerd[1816]: time="2025-09-13T00:43:54.755574928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 13 00:43:54.758569 containerd[1816]: time="2025-09-13T00:43:54.758550521Z" level=info msg="CreateContainer within sandbox \"920f82599901d6c094e2554a91676f1ca9a144599b373845bebae0e3d4a82228\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 13 00:43:54.762749 containerd[1816]: time="2025-09-13T00:43:54.762707318Z" level=info msg="CreateContainer within sandbox \"920f82599901d6c094e2554a91676f1ca9a144599b373845bebae0e3d4a82228\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"69c85c8e3e7d762ace6e667423e7d06116c7d4e9b13fb803c457184629dec627\"" Sep 13 00:43:54.762951 containerd[1816]: time="2025-09-13T00:43:54.762940970Z" level=info msg="StartContainer for \"69c85c8e3e7d762ace6e667423e7d06116c7d4e9b13fb803c457184629dec627\"" Sep 13 00:43:54.837478 systemd[1]: Started cri-containerd-69c85c8e3e7d762ace6e667423e7d06116c7d4e9b13fb803c457184629dec627.scope - libcontainer container 69c85c8e3e7d762ace6e667423e7d06116c7d4e9b13fb803c457184629dec627. 
Sep 13 00:43:54.913821 containerd[1816]: time="2025-09-13T00:43:54.913795046Z" level=info msg="StartContainer for \"69c85c8e3e7d762ace6e667423e7d06116c7d4e9b13fb803c457184629dec627\" returns successfully" Sep 13 00:43:55.519311 kubelet[3069]: I0913 00:43:55.519218 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d5485db55-n2wpn" podStartSLOduration=1.723676982 podStartE2EDuration="3.519188063s" podCreationTimestamp="2025-09-13 00:43:52 +0000 UTC" firstStartedPulling="2025-09-13 00:43:52.960008095 +0000 UTC m=+17.567510172" lastFinishedPulling="2025-09-13 00:43:54.755519182 +0000 UTC m=+19.363021253" observedRunningTime="2025-09-13 00:43:55.518931851 +0000 UTC m=+20.126433964" watchObservedRunningTime="2025-09-13 00:43:55.519188063 +0000 UTC m=+20.126690167" Sep 13 00:43:55.585855 kubelet[3069]: E0913 00:43:55.585755 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:55.585855 kubelet[3069]: W0913 00:43:55.585810 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:55.585855 kubelet[3069]: E0913 00:43:55.585853 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:55.586458 kubelet[3069]: E0913 00:43:55.586378 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:55.586458 kubelet[3069]: W0913 00:43:55.586415 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:55.586458 kubelet[3069]: E0913 00:43:55.586453 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:43:55.587356 kubelet[3069]: E0913 00:43:55.587272 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:55.587356 kubelet[3069]: W0913 00:43:55.587312 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:55.587356 kubelet[3069]: E0913 00:43:55.587348 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:56.439185 kubelet[3069]: E0913 00:43:56.439072 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v25t" podUID="c3e12566-608c-47e4-9de0-c2f38136e9e0" Sep 13 00:43:56.504986 kubelet[3069]: I0913 00:43:56.504973 3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:43:56.562414 containerd[1816]: time="2025-09-13T00:43:56.562384951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:56.562653 containerd[1816]: time="2025-09-13T00:43:56.562627907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 13 00:43:56.563074 containerd[1816]: time="2025-09-13T00:43:56.563022555Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:56.563972 containerd[1816]: time="2025-09-13T00:43:56.563956643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:56.564450 containerd[1816]: time="2025-09-13T00:43:56.564434339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size 
\"5939323\" in 1.808844808s" Sep 13 00:43:56.564486 containerd[1816]: time="2025-09-13T00:43:56.564453699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 00:43:56.565528 containerd[1816]: time="2025-09-13T00:43:56.565516529Z" level=info msg="CreateContainer within sandbox \"f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 00:43:56.570450 containerd[1816]: time="2025-09-13T00:43:56.570434733Z" level=info msg="CreateContainer within sandbox \"f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0\"" Sep 13 00:43:56.570773 containerd[1816]: time="2025-09-13T00:43:56.570760904Z" level=info msg="StartContainer for \"f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0\"" Sep 13 00:43:56.599467 systemd[1]: Started cri-containerd-f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0.scope - libcontainer container f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0. Sep 13 00:43:56.601750 kubelet[3069]: E0913 00:43:56.601662 3069 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:43:56.601750 kubelet[3069]: W0913 00:43:56.601709 3069 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:43:56.601750 kubelet[3069]: E0913 00:43:56.601754 3069 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:43:56.632967 containerd[1816]: time="2025-09-13T00:43:56.632941284Z" level=info msg="StartContainer for \"f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0\" returns successfully" Sep 13 00:43:56.638878 systemd[1]: cri-containerd-f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0.scope: Deactivated successfully. Sep 13 00:43:56.652626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0-rootfs.mount: Deactivated successfully. 
Sep 13 00:43:56.890996 containerd[1816]: time="2025-09-13T00:43:56.890925702Z" level=info msg="shim disconnected" id=f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0 namespace=k8s.io Sep 13 00:43:56.890996 containerd[1816]: time="2025-09-13T00:43:56.890962753Z" level=warning msg="cleaning up after shim disconnected" id=f02c3b93eacccb0d6c60d46b6481146d575f45f3a0c72acd3451cfab621505b0 namespace=k8s.io Sep 13 00:43:56.890996 containerd[1816]: time="2025-09-13T00:43:56.890968989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:43:57.513361 containerd[1816]: time="2025-09-13T00:43:57.513256936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 00:43:58.439189 kubelet[3069]: E0913 00:43:58.439071 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v25t" podUID="c3e12566-608c-47e4-9de0-c2f38136e9e0" Sep 13 00:43:59.899994 containerd[1816]: time="2025-09-13T00:43:59.899966507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:59.900224 containerd[1816]: time="2025-09-13T00:43:59.900109075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 00:43:59.900434 containerd[1816]: time="2025-09-13T00:43:59.900398431Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:59.901513 containerd[1816]: time="2025-09-13T00:43:59.901497332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:43:59.901976 containerd[1816]: time="2025-09-13T00:43:59.901962587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.388643214s" Sep 13 00:43:59.902025 containerd[1816]: time="2025-09-13T00:43:59.901976692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 00:43:59.903641 containerd[1816]: time="2025-09-13T00:43:59.903628725Z" level=info msg="CreateContainer within sandbox \"f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 00:43:59.908278 containerd[1816]: time="2025-09-13T00:43:59.908261721Z" level=info msg="CreateContainer within sandbox \"f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088\"" Sep 13 00:43:59.908495 containerd[1816]: time="2025-09-13T00:43:59.908482578Z" level=info msg="StartContainer for \"46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088\"" Sep 13 00:43:59.934333 systemd[1]: Started cri-containerd-46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088.scope - libcontainer container 46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088. 
Sep 13 00:43:59.961529 containerd[1816]: time="2025-09-13T00:43:59.961469196Z" level=info msg="StartContainer for \"46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088\" returns successfully" Sep 13 00:44:00.438774 kubelet[3069]: E0913 00:44:00.438716 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9v25t" podUID="c3e12566-608c-47e4-9de0-c2f38136e9e0" Sep 13 00:44:00.554066 containerd[1816]: time="2025-09-13T00:44:00.553932747Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 00:44:00.554958 systemd[1]: cri-containerd-46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088.scope: Deactivated successfully. Sep 13 00:44:00.592191 kubelet[3069]: I0913 00:44:00.592168 3069 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 00:44:00.623614 systemd[1]: Created slice kubepods-besteffort-pod72b6dc4c_0f55_48ba_b4eb_7772e5d7ba11.slice - libcontainer container kubepods-besteffort-pod72b6dc4c_0f55_48ba_b4eb_7772e5d7ba11.slice. Sep 13 00:44:00.633125 systemd[1]: Created slice kubepods-burstable-pod7760760a_7878_4b4e_8517_c3f27b4755d1.slice - libcontainer container kubepods-burstable-pod7760760a_7878_4b4e_8517_c3f27b4755d1.slice. Sep 13 00:44:00.644803 systemd[1]: Created slice kubepods-burstable-podeb45447f_5bf3_45bc_8523_f45a74830305.slice - libcontainer container kubepods-burstable-podeb45447f_5bf3_45bc_8523_f45a74830305.slice. 
Sep 13 00:44:00.652586 systemd[1]: Created slice kubepods-besteffort-podfc068cc0_2f07_4c24_a603_b6967550f6d8.slice - libcontainer container kubepods-besteffort-podfc068cc0_2f07_4c24_a603_b6967550f6d8.slice. Sep 13 00:44:00.661160 systemd[1]: Created slice kubepods-besteffort-pod2406872a_e785_4950_8051_da7df7a2ca71.slice - libcontainer container kubepods-besteffort-pod2406872a_e785_4950_8051_da7df7a2ca71.slice. Sep 13 00:44:00.667999 systemd[1]: Created slice kubepods-besteffort-pod1f143f97_3706_4d57_9a71_b20a0d04227b.slice - libcontainer container kubepods-besteffort-pod1f143f97_3706_4d57_9a71_b20a0d04227b.slice. Sep 13 00:44:00.673050 systemd[1]: Created slice kubepods-besteffort-podb6aa3a1d_cf6a_40b2_a42d_f5eda57546a0.slice - libcontainer container kubepods-besteffort-podb6aa3a1d_cf6a_40b2_a42d_f5eda57546a0.slice. Sep 13 00:44:00.736480 kubelet[3069]: I0913 00:44:00.736239 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc068cc0-2f07-4c24-a603-b6967550f6d8-config\") pod \"goldmane-7988f88666-bjldk\" (UID: \"fc068cc0-2f07-4c24-a603-b6967550f6d8\") " pod="calico-system/goldmane-7988f88666-bjldk" Sep 13 00:44:00.736480 kubelet[3069]: I0913 00:44:00.736337 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc068cc0-2f07-4c24-a603-b6967550f6d8-goldmane-ca-bundle\") pod \"goldmane-7988f88666-bjldk\" (UID: \"fc068cc0-2f07-4c24-a603-b6967550f6d8\") " pod="calico-system/goldmane-7988f88666-bjldk" Sep 13 00:44:00.736480 kubelet[3069]: I0913 00:44:00.736420 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11-tigera-ca-bundle\") pod \"calico-kube-controllers-6d7d549755-rwftt\" (UID: \"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11\") 
" pod="calico-system/calico-kube-controllers-6d7d549755-rwftt" Sep 13 00:44:00.736994 kubelet[3069]: I0913 00:44:00.736497 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1f143f97-3706-4d57-9a71-b20a0d04227b-calico-apiserver-certs\") pod \"calico-apiserver-6cdbb444-r7n8m\" (UID: \"1f143f97-3706-4d57-9a71-b20a0d04227b\") " pod="calico-apiserver/calico-apiserver-6cdbb444-r7n8m" Sep 13 00:44:00.736994 kubelet[3069]: I0913 00:44:00.736553 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7760760a-7878-4b4e-8517-c3f27b4755d1-config-volume\") pod \"coredns-7c65d6cfc9-sb6sx\" (UID: \"7760760a-7878-4b4e-8517-c3f27b4755d1\") " pod="kube-system/coredns-7c65d6cfc9-sb6sx" Sep 13 00:44:00.736994 kubelet[3069]: I0913 00:44:00.736613 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fc068cc0-2f07-4c24-a603-b6967550f6d8-goldmane-key-pair\") pod \"goldmane-7988f88666-bjldk\" (UID: \"fc068cc0-2f07-4c24-a603-b6967550f6d8\") " pod="calico-system/goldmane-7988f88666-bjldk" Sep 13 00:44:00.736994 kubelet[3069]: I0913 00:44:00.736675 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgnl\" (UniqueName: \"kubernetes.io/projected/fc068cc0-2f07-4c24-a603-b6967550f6d8-kube-api-access-cbgnl\") pod \"goldmane-7988f88666-bjldk\" (UID: \"fc068cc0-2f07-4c24-a603-b6967550f6d8\") " pod="calico-system/goldmane-7988f88666-bjldk" Sep 13 00:44:00.736994 kubelet[3069]: I0913 00:44:00.736737 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2406872a-e785-4950-8051-da7df7a2ca71-whisker-ca-bundle\") pod 
\"whisker-677b4bc6cc-rc2zn\" (UID: \"2406872a-e785-4950-8051-da7df7a2ca71\") " pod="calico-system/whisker-677b4bc6cc-rc2zn" Sep 13 00:44:00.737633 kubelet[3069]: I0913 00:44:00.736799 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0-calico-apiserver-certs\") pod \"calico-apiserver-6cdbb444-7d78s\" (UID: \"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0\") " pod="calico-apiserver/calico-apiserver-6cdbb444-7d78s" Sep 13 00:44:00.737633 kubelet[3069]: I0913 00:44:00.736871 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z99st\" (UniqueName: \"kubernetes.io/projected/1f143f97-3706-4d57-9a71-b20a0d04227b-kube-api-access-z99st\") pod \"calico-apiserver-6cdbb444-r7n8m\" (UID: \"1f143f97-3706-4d57-9a71-b20a0d04227b\") " pod="calico-apiserver/calico-apiserver-6cdbb444-r7n8m" Sep 13 00:44:00.737633 kubelet[3069]: I0913 00:44:00.736935 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgfc\" (UniqueName: \"kubernetes.io/projected/b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0-kube-api-access-bwgfc\") pod \"calico-apiserver-6cdbb444-7d78s\" (UID: \"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0\") " pod="calico-apiserver/calico-apiserver-6cdbb444-7d78s" Sep 13 00:44:00.737633 kubelet[3069]: I0913 00:44:00.736988 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7tl\" (UniqueName: \"kubernetes.io/projected/7760760a-7878-4b4e-8517-c3f27b4755d1-kube-api-access-kh7tl\") pod \"coredns-7c65d6cfc9-sb6sx\" (UID: \"7760760a-7878-4b4e-8517-c3f27b4755d1\") " pod="kube-system/coredns-7c65d6cfc9-sb6sx" Sep 13 00:44:00.737633 kubelet[3069]: I0913 00:44:00.737074 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb45447f-5bf3-45bc-8523-f45a74830305-config-volume\") pod \"coredns-7c65d6cfc9-crrvn\" (UID: \"eb45447f-5bf3-45bc-8523-f45a74830305\") " pod="kube-system/coredns-7c65d6cfc9-crrvn" Sep 13 00:44:00.738189 kubelet[3069]: I0913 00:44:00.737139 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btvbf\" (UniqueName: \"kubernetes.io/projected/2406872a-e785-4950-8051-da7df7a2ca71-kube-api-access-btvbf\") pod \"whisker-677b4bc6cc-rc2zn\" (UID: \"2406872a-e785-4950-8051-da7df7a2ca71\") " pod="calico-system/whisker-677b4bc6cc-rc2zn" Sep 13 00:44:00.738189 kubelet[3069]: I0913 00:44:00.737206 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2406872a-e785-4950-8051-da7df7a2ca71-whisker-backend-key-pair\") pod \"whisker-677b4bc6cc-rc2zn\" (UID: \"2406872a-e785-4950-8051-da7df7a2ca71\") " pod="calico-system/whisker-677b4bc6cc-rc2zn" Sep 13 00:44:00.738189 kubelet[3069]: I0913 00:44:00.737259 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p44j\" (UniqueName: \"kubernetes.io/projected/72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11-kube-api-access-6p44j\") pod \"calico-kube-controllers-6d7d549755-rwftt\" (UID: \"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11\") " pod="calico-system/calico-kube-controllers-6d7d549755-rwftt" Sep 13 00:44:00.738189 kubelet[3069]: I0913 00:44:00.737391 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qg2f\" (UniqueName: \"kubernetes.io/projected/eb45447f-5bf3-45bc-8523-f45a74830305-kube-api-access-2qg2f\") pod \"coredns-7c65d6cfc9-crrvn\" (UID: \"eb45447f-5bf3-45bc-8523-f45a74830305\") " pod="kube-system/coredns-7c65d6cfc9-crrvn" Sep 13 00:44:00.929314 containerd[1816]: 
time="2025-09-13T00:44:00.929228588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7d549755-rwftt,Uid:72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11,Namespace:calico-system,Attempt:0,}" Sep 13 00:44:00.929986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088-rootfs.mount: Deactivated successfully. Sep 13 00:44:00.937940 containerd[1816]: time="2025-09-13T00:44:00.937920702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sb6sx,Uid:7760760a-7878-4b4e-8517-c3f27b4755d1,Namespace:kube-system,Attempt:0,}" Sep 13 00:44:00.949410 containerd[1816]: time="2025-09-13T00:44:00.949352228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-crrvn,Uid:eb45447f-5bf3-45bc-8523-f45a74830305,Namespace:kube-system,Attempt:0,}" Sep 13 00:44:00.955858 containerd[1816]: time="2025-09-13T00:44:00.955805577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bjldk,Uid:fc068cc0-2f07-4c24-a603-b6967550f6d8,Namespace:calico-system,Attempt:0,}" Sep 13 00:44:00.965863 containerd[1816]: time="2025-09-13T00:44:00.965832128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677b4bc6cc-rc2zn,Uid:2406872a-e785-4950-8051-da7df7a2ca71,Namespace:calico-system,Attempt:0,}" Sep 13 00:44:00.971333 containerd[1816]: time="2025-09-13T00:44:00.971296300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-r7n8m,Uid:1f143f97-3706-4d57-9a71-b20a0d04227b,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:44:00.975734 containerd[1816]: time="2025-09-13T00:44:00.975680812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-7d78s,Uid:b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:44:00.977289 containerd[1816]: time="2025-09-13T00:44:00.977263013Z" level=info msg="shim disconnected" 
id=46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088 namespace=k8s.io Sep 13 00:44:00.977326 containerd[1816]: time="2025-09-13T00:44:00.977289751Z" level=warning msg="cleaning up after shim disconnected" id=46a1ac4e060ec2ab8a0464d3635a977271b6ce8059de93a055f4494ed4e79088 namespace=k8s.io Sep 13 00:44:00.977326 containerd[1816]: time="2025-09-13T00:44:00.977299022Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 00:44:01.009468 containerd[1816]: time="2025-09-13T00:44:01.009332878Z" level=error msg="Failed to destroy network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009468 containerd[1816]: time="2025-09-13T00:44:01.009355393Z" level=error msg="Failed to destroy network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009611 containerd[1816]: time="2025-09-13T00:44:01.009593089Z" level=error msg="encountered an error cleaning up failed sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009652 containerd[1816]: time="2025-09-13T00:44:01.009605938Z" level=error msg="encountered an error cleaning up failed sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009692 containerd[1816]: time="2025-09-13T00:44:01.009657470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7d549755-rwftt,Uid:72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009757 containerd[1816]: time="2025-09-13T00:44:01.009629801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bjldk,Uid:fc068cc0-2f07-4c24-a603-b6967550f6d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009840 kubelet[3069]: E0913 00:44:01.009813 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009904 kubelet[3069]: E0913 00:44:01.009880 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bjldk" Sep 13 00:44:01.009943 kubelet[3069]: E0913 00:44:01.009813 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.009943 kubelet[3069]: E0913 00:44:01.009902 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bjldk" Sep 13 00:44:01.009943 kubelet[3069]: E0913 00:44:01.009928 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d7d549755-rwftt" Sep 13 00:44:01.010097 kubelet[3069]: E0913 00:44:01.009946 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d7d549755-rwftt" Sep 13 00:44:01.010097 kubelet[3069]: E0913 00:44:01.009942 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-bjldk_calico-system(fc068cc0-2f07-4c24-a603-b6967550f6d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-bjldk_calico-system(fc068cc0-2f07-4c24-a603-b6967550f6d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-bjldk" podUID="fc068cc0-2f07-4c24-a603-b6967550f6d8" Sep 13 00:44:01.010186 kubelet[3069]: E0913 00:44:01.009972 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d7d549755-rwftt_calico-system(72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d7d549755-rwftt_calico-system(72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d7d549755-rwftt" podUID="72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11" Sep 13 00:44:01.010286 containerd[1816]: time="2025-09-13T00:44:01.010272333Z" level=error msg="Failed to destroy network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.010463 containerd[1816]: time="2025-09-13T00:44:01.010450990Z" level=error msg="encountered an error cleaning up failed sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.010491 containerd[1816]: time="2025-09-13T00:44:01.010473885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-crrvn,Uid:eb45447f-5bf3-45bc-8523-f45a74830305,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.010564 kubelet[3069]: E0913 00:44:01.010549 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.010588 kubelet[3069]: E0913 00:44:01.010572 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-crrvn" Sep 13 00:44:01.010588 kubelet[3069]: E0913 00:44:01.010583 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-crrvn" Sep 13 00:44:01.010626 kubelet[3069]: E0913 00:44:01.010602 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-crrvn_kube-system(eb45447f-5bf3-45bc-8523-f45a74830305)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-crrvn_kube-system(eb45447f-5bf3-45bc-8523-f45a74830305)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-crrvn" podUID="eb45447f-5bf3-45bc-8523-f45a74830305" Sep 13 00:44:01.010884 containerd[1816]: time="2025-09-13T00:44:01.010869150Z" level=error msg="Failed to destroy network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.010974 containerd[1816]: time="2025-09-13T00:44:01.010959652Z" level=error msg="Failed to destroy network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.011065 containerd[1816]: time="2025-09-13T00:44:01.011050133Z" level=error msg="encountered an error cleaning up failed sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.011105 containerd[1816]: time="2025-09-13T00:44:01.011077822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-677b4bc6cc-rc2zn,Uid:2406872a-e785-4950-8051-da7df7a2ca71,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.011144 containerd[1816]: time="2025-09-13T00:44:01.011105513Z" level=error msg="encountered an error cleaning up failed sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.011144 containerd[1816]: time="2025-09-13T00:44:01.011126911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sb6sx,Uid:7760760a-7878-4b4e-8517-c3f27b4755d1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.011188 kubelet[3069]: E0913 00:44:01.011156 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.011188 kubelet[3069]: E0913 00:44:01.011176 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-677b4bc6cc-rc2zn" Sep 13 00:44:01.011226 kubelet[3069]: E0913 00:44:01.011181 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.011226 kubelet[3069]: E0913 00:44:01.011188 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-677b4bc6cc-rc2zn" Sep 13 00:44:01.011226 kubelet[3069]: E0913 00:44:01.011200 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sb6sx" Sep 13 00:44:01.011288 kubelet[3069]: E0913 00:44:01.011206 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-677b4bc6cc-rc2zn_calico-system(2406872a-e785-4950-8051-da7df7a2ca71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-677b4bc6cc-rc2zn_calico-system(2406872a-e785-4950-8051-da7df7a2ca71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-677b4bc6cc-rc2zn" podUID="2406872a-e785-4950-8051-da7df7a2ca71" Sep 13 00:44:01.011288 kubelet[3069]: E0913 00:44:01.011211 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sb6sx" Sep 13 00:44:01.011288 kubelet[3069]: E0913 00:44:01.011227 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-sb6sx_kube-system(7760760a-7878-4b4e-8517-c3f27b4755d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sb6sx_kube-system(7760760a-7878-4b4e-8517-c3f27b4755d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sb6sx" podUID="7760760a-7878-4b4e-8517-c3f27b4755d1" Sep 13 00:44:01.013069 containerd[1816]: time="2025-09-13T00:44:01.013043309Z" level=error msg="Failed to destroy network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.013236 containerd[1816]: time="2025-09-13T00:44:01.013219844Z" level=error msg="encountered an error cleaning up failed sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.013279 containerd[1816]: time="2025-09-13T00:44:01.013256332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-7d78s,Uid:b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 13 00:44:01.013352 kubelet[3069]: E0913 00:44:01.013337 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.013380 kubelet[3069]: E0913 00:44:01.013362 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cdbb444-7d78s" Sep 13 00:44:01.013380 kubelet[3069]: E0913 00:44:01.013373 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cdbb444-7d78s" Sep 13 00:44:01.013422 kubelet[3069]: E0913 00:44:01.013393 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cdbb444-7d78s_calico-apiserver(b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cdbb444-7d78s_calico-apiserver(b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cdbb444-7d78s" podUID="b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0" Sep 13 00:44:01.013580 containerd[1816]: time="2025-09-13T00:44:01.013566792Z" level=error msg="Failed to destroy network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.013706 containerd[1816]: time="2025-09-13T00:44:01.013694121Z" level=error msg="encountered an error cleaning up failed sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.013734 containerd[1816]: time="2025-09-13T00:44:01.013715082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-r7n8m,Uid:1f143f97-3706-4d57-9a71-b20a0d04227b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.013816 kubelet[3069]: E0913 00:44:01.013802 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.013840 kubelet[3069]: E0913 00:44:01.013823 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cdbb444-r7n8m" Sep 13 00:44:01.013840 kubelet[3069]: E0913 00:44:01.013833 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cdbb444-r7n8m" Sep 13 00:44:01.013876 kubelet[3069]: E0913 00:44:01.013850 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cdbb444-r7n8m_calico-apiserver(1f143f97-3706-4d57-9a71-b20a0d04227b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cdbb444-r7n8m_calico-apiserver(1f143f97-3706-4d57-9a71-b20a0d04227b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6cdbb444-r7n8m" podUID="1f143f97-3706-4d57-9a71-b20a0d04227b" Sep 13 00:44:01.521341 kubelet[3069]: I0913 00:44:01.521242 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:01.522754 containerd[1816]: time="2025-09-13T00:44:01.522640966Z" level=info msg="StopPodSandbox for \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\"" Sep 13 00:44:01.523186 containerd[1816]: time="2025-09-13T00:44:01.523090596Z" level=info msg="Ensure that sandbox 1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe in task-service has been cleanup successfully" Sep 13 00:44:01.523556 kubelet[3069]: I0913 00:44:01.523470 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:01.524537 containerd[1816]: time="2025-09-13T00:44:01.524476412Z" level=info msg="StopPodSandbox for \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\"" Sep 13 00:44:01.524932 containerd[1816]: time="2025-09-13T00:44:01.524884095Z" level=info msg="Ensure that sandbox 03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac in task-service has been cleanup successfully" Sep 13 00:44:01.529288 containerd[1816]: time="2025-09-13T00:44:01.529270731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:44:01.529363 kubelet[3069]: I0913 00:44:01.529287 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:01.529704 containerd[1816]: time="2025-09-13T00:44:01.529682688Z" level=info msg="StopPodSandbox for \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\"" Sep 13 00:44:01.529918 containerd[1816]: time="2025-09-13T00:44:01.529903842Z" level=info msg="Ensure 
that sandbox b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25 in task-service has been cleanup successfully" Sep 13 00:44:01.530644 kubelet[3069]: I0913 00:44:01.530628 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:01.531146 containerd[1816]: time="2025-09-13T00:44:01.531118174Z" level=info msg="StopPodSandbox for \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\"" Sep 13 00:44:01.531271 containerd[1816]: time="2025-09-13T00:44:01.531258211Z" level=info msg="Ensure that sandbox 38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33 in task-service has been cleanup successfully" Sep 13 00:44:01.531602 kubelet[3069]: I0913 00:44:01.531580 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:01.531978 containerd[1816]: time="2025-09-13T00:44:01.531953147Z" level=info msg="StopPodSandbox for \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\"" Sep 13 00:44:01.532122 containerd[1816]: time="2025-09-13T00:44:01.532109551Z" level=info msg="Ensure that sandbox db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba in task-service has been cleanup successfully" Sep 13 00:44:01.532309 kubelet[3069]: I0913 00:44:01.532299 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Sep 13 00:44:01.533753 containerd[1816]: time="2025-09-13T00:44:01.533728261Z" level=info msg="StopPodSandbox for \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\"" Sep 13 00:44:01.533885 containerd[1816]: time="2025-09-13T00:44:01.533871723Z" level=info msg="Ensure that sandbox bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58 in task-service has been cleanup 
successfully" Sep 13 00:44:01.533920 kubelet[3069]: I0913 00:44:01.533903 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Sep 13 00:44:01.534455 containerd[1816]: time="2025-09-13T00:44:01.534431979Z" level=info msg="StopPodSandbox for \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\"" Sep 13 00:44:01.534592 containerd[1816]: time="2025-09-13T00:44:01.534578772Z" level=info msg="Ensure that sandbox 30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339 in task-service has been cleanup successfully" Sep 13 00:44:01.544581 containerd[1816]: time="2025-09-13T00:44:01.544536508Z" level=error msg="StopPodSandbox for \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\" failed" error="failed to destroy network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.544721 kubelet[3069]: E0913 00:44:01.544699 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:01.544772 kubelet[3069]: E0913 00:44:01.544741 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe"} Sep 13 00:44:01.544794 containerd[1816]: time="2025-09-13T00:44:01.544770969Z" level=error msg="StopPodSandbox for 
\"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\" failed" error="failed to destroy network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.544818 kubelet[3069]: E0913 00:44:01.544794 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc068cc0-2f07-4c24-a603-b6967550f6d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:01.544869 kubelet[3069]: E0913 00:44:01.544816 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc068cc0-2f07-4c24-a603-b6967550f6d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-bjldk" podUID="fc068cc0-2f07-4c24-a603-b6967550f6d8" Sep 13 00:44:01.544869 kubelet[3069]: E0913 00:44:01.544856 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:01.544925 kubelet[3069]: E0913 00:44:01.544879 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac"} Sep 13 00:44:01.544925 kubelet[3069]: E0913 00:44:01.544896 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb45447f-5bf3-45bc-8523-f45a74830305\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:01.544925 kubelet[3069]: E0913 00:44:01.544907 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb45447f-5bf3-45bc-8523-f45a74830305\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-crrvn" podUID="eb45447f-5bf3-45bc-8523-f45a74830305" Sep 13 00:44:01.545590 containerd[1816]: time="2025-09-13T00:44:01.545567492Z" level=error msg="StopPodSandbox for \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\" failed" error="failed to destroy network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 13 00:44:01.545670 kubelet[3069]: E0913 00:44:01.545655 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:01.545713 kubelet[3069]: E0913 00:44:01.545677 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25"} Sep 13 00:44:01.545713 kubelet[3069]: E0913 00:44:01.545702 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7760760a-7878-4b4e-8517-c3f27b4755d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:01.545775 kubelet[3069]: E0913 00:44:01.545720 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7760760a-7878-4b4e-8517-c3f27b4755d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sb6sx" podUID="7760760a-7878-4b4e-8517-c3f27b4755d1" Sep 13 
00:44:01.546453 containerd[1816]: time="2025-09-13T00:44:01.546423477Z" level=error msg="StopPodSandbox for \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\" failed" error="failed to destroy network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.546617 kubelet[3069]: E0913 00:44:01.546528 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:01.546617 kubelet[3069]: E0913 00:44:01.546553 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33"} Sep 13 00:44:01.546617 kubelet[3069]: E0913 00:44:01.546576 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:01.546617 kubelet[3069]: E0913 00:44:01.546595 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d7d549755-rwftt" podUID="72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11" Sep 13 00:44:01.548379 containerd[1816]: time="2025-09-13T00:44:01.548297739Z" level=error msg="StopPodSandbox for \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\" failed" error="failed to destroy network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.548449 kubelet[3069]: E0913 00:44:01.548435 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:01.548516 kubelet[3069]: E0913 00:44:01.548452 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba"} Sep 13 00:44:01.548516 kubelet[3069]: E0913 00:44:01.548481 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:01.548516 kubelet[3069]: E0913 00:44:01.548507 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cdbb444-7d78s" podUID="b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0" Sep 13 00:44:01.549058 containerd[1816]: time="2025-09-13T00:44:01.549037948Z" level=error msg="StopPodSandbox for \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\" failed" error="failed to destroy network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.549140 kubelet[3069]: E0913 00:44:01.549123 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Sep 13 00:44:01.549183 kubelet[3069]: E0913 00:44:01.549144 
3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"} Sep 13 00:44:01.549183 kubelet[3069]: E0913 00:44:01.549166 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2406872a-e785-4950-8051-da7df7a2ca71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:01.549257 kubelet[3069]: E0913 00:44:01.549192 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2406872a-e785-4950-8051-da7df7a2ca71\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-677b4bc6cc-rc2zn" podUID="2406872a-e785-4950-8051-da7df7a2ca71" Sep 13 00:44:01.550450 containerd[1816]: time="2025-09-13T00:44:01.550407840Z" level=error msg="StopPodSandbox for \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\" failed" error="failed to destroy network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:01.550507 kubelet[3069]: E0913 00:44:01.550493 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to destroy network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Sep 13 00:44:01.550543 kubelet[3069]: E0913 00:44:01.550510 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"} Sep 13 00:44:01.550543 kubelet[3069]: E0913 00:44:01.550538 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f143f97-3706-4d57-9a71-b20a0d04227b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:01.550593 kubelet[3069]: E0913 00:44:01.550548 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f143f97-3706-4d57-9a71-b20a0d04227b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cdbb444-r7n8m" podUID="1f143f97-3706-4d57-9a71-b20a0d04227b" Sep 13 00:44:01.913668 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe-shm.mount: 
Deactivated successfully. Sep 13 00:44:01.913721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac-shm.mount: Deactivated successfully. Sep 13 00:44:01.913756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33-shm.mount: Deactivated successfully. Sep 13 00:44:01.913789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25-shm.mount: Deactivated successfully. Sep 13 00:44:02.272312 kubelet[3069]: I0913 00:44:02.272088 3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 00:44:02.453587 systemd[1]: Created slice kubepods-besteffort-podc3e12566_608c_47e4_9de0_c2f38136e9e0.slice - libcontainer container kubepods-besteffort-podc3e12566_608c_47e4_9de0_c2f38136e9e0.slice. Sep 13 00:44:02.459216 containerd[1816]: time="2025-09-13T00:44:02.459130102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v25t,Uid:c3e12566-608c-47e4-9de0-c2f38136e9e0,Namespace:calico-system,Attempt:0,}" Sep 13 00:44:02.485397 containerd[1816]: time="2025-09-13T00:44:02.485338232Z" level=error msg="Failed to destroy network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:02.485568 containerd[1816]: time="2025-09-13T00:44:02.485525761Z" level=error msg="encountered an error cleaning up failed sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 13 00:44:02.485568 containerd[1816]: time="2025-09-13T00:44:02.485557846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v25t,Uid:c3e12566-608c-47e4-9de0-c2f38136e9e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:02.485741 kubelet[3069]: E0913 00:44:02.485690 3069 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:02.485773 kubelet[3069]: E0913 00:44:02.485738 3069 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9v25t" Sep 13 00:44:02.485773 kubelet[3069]: E0913 00:44:02.485757 3069 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9v25t" Sep 13 
00:44:02.485820 kubelet[3069]: E0913 00:44:02.485795 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9v25t_calico-system(c3e12566-608c-47e4-9de0-c2f38136e9e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9v25t_calico-system(c3e12566-608c-47e4-9de0-c2f38136e9e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9v25t" podUID="c3e12566-608c-47e4-9de0-c2f38136e9e0" Sep 13 00:44:02.486831 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc-shm.mount: Deactivated successfully. Sep 13 00:44:02.539412 kubelet[3069]: I0913 00:44:02.539190 3069 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:02.540321 containerd[1816]: time="2025-09-13T00:44:02.540299636Z" level=info msg="StopPodSandbox for \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\"" Sep 13 00:44:02.540429 containerd[1816]: time="2025-09-13T00:44:02.540415519Z" level=info msg="Ensure that sandbox d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc in task-service has been cleanup successfully" Sep 13 00:44:02.554762 containerd[1816]: time="2025-09-13T00:44:02.554728234Z" level=error msg="StopPodSandbox for \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\" failed" error="failed to destroy network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:44:02.554869 kubelet[3069]: E0913 00:44:02.554846 3069 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:02.554901 kubelet[3069]: E0913 00:44:02.554878 3069 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc"} Sep 13 00:44:02.554918 kubelet[3069]: E0913 00:44:02.554902 3069 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3e12566-608c-47e4-9de0-c2f38136e9e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:44:02.554958 kubelet[3069]: E0913 00:44:02.554915 3069 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3e12566-608c-47e4-9de0-c2f38136e9e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-9v25t" podUID="c3e12566-608c-47e4-9de0-c2f38136e9e0" Sep 13 00:44:04.967415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737744262.mount: Deactivated successfully. Sep 13 00:44:04.984512 containerd[1816]: time="2025-09-13T00:44:04.984463257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:04.984658 containerd[1816]: time="2025-09-13T00:44:04.984618609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:44:04.985040 containerd[1816]: time="2025-09-13T00:44:04.984998308Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:04.985956 containerd[1816]: time="2025-09-13T00:44:04.985913596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:04.986320 containerd[1816]: time="2025-09-13T00:44:04.986280103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 3.456989977s" Sep 13 00:44:04.986320 containerd[1816]: time="2025-09-13T00:44:04.986294293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:44:04.989698 containerd[1816]: time="2025-09-13T00:44:04.989680753Z" level=info msg="CreateContainer within sandbox 
\"f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:44:04.995052 containerd[1816]: time="2025-09-13T00:44:04.995036221Z" level=info msg="CreateContainer within sandbox \"f8749c904004abd0c1482634e7f96b1a8c6b0b4d3007874819c3b724b1a65b60\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bad12ec02d21d80790149efb61229f5f61b17db8752dd107c25fdafc9f514702\"" Sep 13 00:44:04.995254 containerd[1816]: time="2025-09-13T00:44:04.995242074Z" level=info msg="StartContainer for \"bad12ec02d21d80790149efb61229f5f61b17db8752dd107c25fdafc9f514702\"" Sep 13 00:44:05.019322 systemd[1]: Started cri-containerd-bad12ec02d21d80790149efb61229f5f61b17db8752dd107c25fdafc9f514702.scope - libcontainer container bad12ec02d21d80790149efb61229f5f61b17db8752dd107c25fdafc9f514702. Sep 13 00:44:05.032392 containerd[1816]: time="2025-09-13T00:44:05.032368731Z" level=info msg="StartContainer for \"bad12ec02d21d80790149efb61229f5f61b17db8752dd107c25fdafc9f514702\" returns successfully" Sep 13 00:44:05.101215 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:44:05.101266 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 13 00:44:05.139052 containerd[1816]: time="2025-09-13T00:44:05.139027502Z" level=info msg="StopPodSandbox for \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\"" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.172 [INFO][4731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.172 [INFO][4731] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" iface="eth0" netns="/var/run/netns/cni-9ec8a109-4f4c-53da-6c48-4374aa0f8d57" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.172 [INFO][4731] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" iface="eth0" netns="/var/run/netns/cni-9ec8a109-4f4c-53da-6c48-4374aa0f8d57" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.172 [INFO][4731] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" iface="eth0" netns="/var/run/netns/cni-9ec8a109-4f4c-53da-6c48-4374aa0f8d57" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.172 [INFO][4731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.172 [INFO][4731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.183 [INFO][4761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.183 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.183 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.186 [WARNING][4761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.186 [INFO][4761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0" Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.187 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:05.189514 containerd[1816]: 2025-09-13 00:44:05.188 [INFO][4731] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Sep 13 00:44:05.189819 containerd[1816]: time="2025-09-13T00:44:05.189591145Z" level=info msg="TearDown network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\" successfully" Sep 13 00:44:05.189819 containerd[1816]: time="2025-09-13T00:44:05.189605370Z" level=info msg="StopPodSandbox for \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\" returns successfully" Sep 13 00:44:05.365910 kubelet[3069]: I0913 00:44:05.365803 3069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2406872a-e785-4950-8051-da7df7a2ca71-whisker-ca-bundle\") pod \"2406872a-e785-4950-8051-da7df7a2ca71\" (UID: \"2406872a-e785-4950-8051-da7df7a2ca71\") " Sep 13 00:44:05.365910 kubelet[3069]: I0913 00:44:05.365923 3069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btvbf\" (UniqueName: \"kubernetes.io/projected/2406872a-e785-4950-8051-da7df7a2ca71-kube-api-access-btvbf\") pod \"2406872a-e785-4950-8051-da7df7a2ca71\" (UID: \"2406872a-e785-4950-8051-da7df7a2ca71\") " Sep 13 00:44:05.366861 kubelet[3069]: I0913 00:44:05.365990 3069 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2406872a-e785-4950-8051-da7df7a2ca71-whisker-backend-key-pair\") pod \"2406872a-e785-4950-8051-da7df7a2ca71\" (UID: \"2406872a-e785-4950-8051-da7df7a2ca71\") " Sep 13 00:44:05.366861 kubelet[3069]: I0913 00:44:05.366650 3069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2406872a-e785-4950-8051-da7df7a2ca71-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2406872a-e785-4950-8051-da7df7a2ca71" (UID: "2406872a-e785-4950-8051-da7df7a2ca71"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:44:05.371776 kubelet[3069]: I0913 00:44:05.371674 3069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2406872a-e785-4950-8051-da7df7a2ca71-kube-api-access-btvbf" (OuterVolumeSpecName: "kube-api-access-btvbf") pod "2406872a-e785-4950-8051-da7df7a2ca71" (UID: "2406872a-e785-4950-8051-da7df7a2ca71"). InnerVolumeSpecName "kube-api-access-btvbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:44:05.371968 kubelet[3069]: I0913 00:44:05.371791 3069 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2406872a-e785-4950-8051-da7df7a2ca71-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2406872a-e785-4950-8051-da7df7a2ca71" (UID: "2406872a-e785-4950-8051-da7df7a2ca71"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:44:05.454554 systemd[1]: Removed slice kubepods-besteffort-pod2406872a_e785_4950_8051_da7df7a2ca71.slice - libcontainer container kubepods-besteffort-pod2406872a_e785_4950_8051_da7df7a2ca71.slice. 
Sep 13 00:44:05.466595 kubelet[3069]: I0913 00:44:05.466488 3069 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btvbf\" (UniqueName: \"kubernetes.io/projected/2406872a-e785-4950-8051-da7df7a2ca71-kube-api-access-btvbf\") on node \"ci-4081.3.5-n-2af8d06a22\" DevicePath \"\"" Sep 13 00:44:05.466595 kubelet[3069]: I0913 00:44:05.466551 3069 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2406872a-e785-4950-8051-da7df7a2ca71-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-2af8d06a22\" DevicePath \"\"" Sep 13 00:44:05.466595 kubelet[3069]: I0913 00:44:05.466584 3069 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2406872a-e785-4950-8051-da7df7a2ca71-whisker-ca-bundle\") on node \"ci-4081.3.5-n-2af8d06a22\" DevicePath \"\"" Sep 13 00:44:05.611273 kubelet[3069]: I0913 00:44:05.611182 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4tmh9" podStartSLOduration=1.9080029920000001 podStartE2EDuration="13.611152341s" podCreationTimestamp="2025-09-13 00:43:52 +0000 UTC" firstStartedPulling="2025-09-13 00:43:53.283502612 +0000 UTC m=+17.891004682" lastFinishedPulling="2025-09-13 00:44:04.986651961 +0000 UTC m=+29.594154031" observedRunningTime="2025-09-13 00:44:05.610990145 +0000 UTC m=+30.218492243" watchObservedRunningTime="2025-09-13 00:44:05.611152341 +0000 UTC m=+30.218654441" Sep 13 00:44:05.617428 systemd[1]: Created slice kubepods-besteffort-podfcef6e33_87da_4eaf_9da8_6f10bcb16928.slice - libcontainer container kubepods-besteffort-podfcef6e33_87da_4eaf_9da8_6f10bcb16928.slice. 
Sep 13 00:44:05.768835 kubelet[3069]: I0913 00:44:05.768714 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcef6e33-87da-4eaf-9da8-6f10bcb16928-whisker-backend-key-pair\") pod \"whisker-dd59b8ffb-w59zj\" (UID: \"fcef6e33-87da-4eaf-9da8-6f10bcb16928\") " pod="calico-system/whisker-dd59b8ffb-w59zj" Sep 13 00:44:05.768835 kubelet[3069]: I0913 00:44:05.768822 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcef6e33-87da-4eaf-9da8-6f10bcb16928-whisker-ca-bundle\") pod \"whisker-dd59b8ffb-w59zj\" (UID: \"fcef6e33-87da-4eaf-9da8-6f10bcb16928\") " pod="calico-system/whisker-dd59b8ffb-w59zj" Sep 13 00:44:05.769318 kubelet[3069]: I0913 00:44:05.768983 3069 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqv48\" (UniqueName: \"kubernetes.io/projected/fcef6e33-87da-4eaf-9da8-6f10bcb16928-kube-api-access-xqv48\") pod \"whisker-dd59b8ffb-w59zj\" (UID: \"fcef6e33-87da-4eaf-9da8-6f10bcb16928\") " pod="calico-system/whisker-dd59b8ffb-w59zj" Sep 13 00:44:05.921553 containerd[1816]: time="2025-09-13T00:44:05.921312722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd59b8ffb-w59zj,Uid:fcef6e33-87da-4eaf-9da8-6f10bcb16928,Namespace:calico-system,Attempt:0,}" Sep 13 00:44:05.970731 systemd[1]: run-netns-cni\x2d9ec8a109\x2d4f4c\x2d53da\x2d6c48\x2d4374aa0f8d57.mount: Deactivated successfully. Sep 13 00:44:05.970793 systemd[1]: var-lib-kubelet-pods-2406872a\x2de785\x2d4950\x2d8051\x2dda7df7a2ca71-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbtvbf.mount: Deactivated successfully. 
Sep 13 00:44:05.970841 systemd[1]: var-lib-kubelet-pods-2406872a\x2de785\x2d4950\x2d8051\x2dda7df7a2ca71-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:44:05.984947 systemd-networkd[1605]: cali4fea259bb97: Link UP Sep 13 00:44:05.985152 systemd-networkd[1605]: cali4fea259bb97: Gained carrier Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.937 [INFO][4791] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.944 [INFO][4791] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0 whisker-dd59b8ffb- calico-system fcef6e33-87da-4eaf-9da8-6f10bcb16928 868 0 2025-09-13 00:44:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dd59b8ffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 whisker-dd59b8ffb-w59zj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4fea259bb97 [] [] }} ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.944 [INFO][4791] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.958 [INFO][4814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" 
HandleID="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.958 [INFO][4814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" HandleID="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"whisker-dd59b8ffb-w59zj", "timestamp":"2025-09-13 00:44:05.958063856 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.958 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.958 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.958 [INFO][4814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.963 [INFO][4814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.967 [INFO][4814] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.970 [INFO][4814] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.971 [INFO][4814] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.972 [INFO][4814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.972 [INFO][4814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.973 [INFO][4814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05 Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.976 [INFO][4814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.979 [INFO][4814] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.193/26] block=192.168.77.192/26 handle="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.979 [INFO][4814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.193/26] handle="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.979 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:05.991984 containerd[1816]: 2025-09-13 00:44:05.979 [INFO][4814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.193/26] IPv6=[] ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" HandleID="k8s-pod-network.3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" Sep 13 00:44:05.992639 containerd[1816]: 2025-09-13 00:44:05.980 [INFO][4791] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0", GenerateName:"whisker-dd59b8ffb-", Namespace:"calico-system", SelfLink:"", UID:"fcef6e33-87da-4eaf-9da8-6f10bcb16928", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dd59b8ffb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"whisker-dd59b8ffb-w59zj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4fea259bb97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:05.992639 containerd[1816]: 2025-09-13 00:44:05.980 [INFO][4791] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.193/32] ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" Sep 13 00:44:05.992639 containerd[1816]: 2025-09-13 00:44:05.980 [INFO][4791] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fea259bb97 ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" Sep 13 00:44:05.992639 containerd[1816]: 2025-09-13 00:44:05.985 [INFO][4791] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" Sep 13 00:44:05.992639 containerd[1816]: 2025-09-13 00:44:05.985 [INFO][4791] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0", GenerateName:"whisker-dd59b8ffb-", Namespace:"calico-system", SelfLink:"", UID:"fcef6e33-87da-4eaf-9da8-6f10bcb16928", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dd59b8ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05", Pod:"whisker-dd59b8ffb-w59zj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.77.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4fea259bb97", MAC:"fe:2a:9f:11:d8:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:05.992639 containerd[1816]: 2025-09-13 00:44:05.991 [INFO][4791] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05" 
Namespace="calico-system" Pod="whisker-dd59b8ffb-w59zj" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--dd59b8ffb--w59zj-eth0" Sep 13 00:44:06.000411 containerd[1816]: time="2025-09-13T00:44:06.000345921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:06.000411 containerd[1816]: time="2025-09-13T00:44:06.000371819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:06.000411 containerd[1816]: time="2025-09-13T00:44:06.000378864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:06.000516 containerd[1816]: time="2025-09-13T00:44:06.000445583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:06.016199 systemd[1]: Started cri-containerd-3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05.scope - libcontainer container 3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05. 
Sep 13 00:44:06.041296 containerd[1816]: time="2025-09-13T00:44:06.041249007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd59b8ffb-w59zj,Uid:fcef6e33-87da-4eaf-9da8-6f10bcb16928,Namespace:calico-system,Attempt:0,} returns sandbox id \"3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05\"" Sep 13 00:44:06.041997 containerd[1816]: time="2025-09-13T00:44:06.041984350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:44:06.290098 kernel: bpftool[5037]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:44:06.458913 systemd-networkd[1605]: vxlan.calico: Link UP Sep 13 00:44:06.458916 systemd-networkd[1605]: vxlan.calico: Gained carrier Sep 13 00:44:07.433673 containerd[1816]: time="2025-09-13T00:44:07.433620796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:07.433903 containerd[1816]: time="2025-09-13T00:44:07.433799563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 00:44:07.434172 containerd[1816]: time="2025-09-13T00:44:07.434157126Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:07.435369 containerd[1816]: time="2025-09-13T00:44:07.435356126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:07.435871 containerd[1816]: time="2025-09-13T00:44:07.435857939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.393855062s" Sep 13 00:44:07.435899 containerd[1816]: time="2025-09-13T00:44:07.435875498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 00:44:07.437025 containerd[1816]: time="2025-09-13T00:44:07.437003283Z" level=info msg="CreateContainer within sandbox \"3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 00:44:07.439820 kubelet[3069]: I0913 00:44:07.439803 3069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2406872a-e785-4950-8051-da7df7a2ca71" path="/var/lib/kubelet/pods/2406872a-e785-4950-8051-da7df7a2ca71/volumes" Sep 13 00:44:07.441524 containerd[1816]: time="2025-09-13T00:44:07.441507540Z" level=info msg="CreateContainer within sandbox \"3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c20e9063987ebd279c5fa62c3bb1419a7242fd4be01ea99a8de599d853cd8b47\"" Sep 13 00:44:07.441702 containerd[1816]: time="2025-09-13T00:44:07.441691849Z" level=info msg="StartContainer for \"c20e9063987ebd279c5fa62c3bb1419a7242fd4be01ea99a8de599d853cd8b47\"" Sep 13 00:44:07.461543 systemd[1]: Started cri-containerd-c20e9063987ebd279c5fa62c3bb1419a7242fd4be01ea99a8de599d853cd8b47.scope - libcontainer container c20e9063987ebd279c5fa62c3bb1419a7242fd4be01ea99a8de599d853cd8b47. 
Sep 13 00:44:07.480329 systemd-networkd[1605]: cali4fea259bb97: Gained IPv6LL Sep 13 00:44:07.536312 containerd[1816]: time="2025-09-13T00:44:07.536263080Z" level=info msg="StartContainer for \"c20e9063987ebd279c5fa62c3bb1419a7242fd4be01ea99a8de599d853cd8b47\" returns successfully" Sep 13 00:44:07.536828 containerd[1816]: time="2025-09-13T00:44:07.536815806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 00:44:07.545069 systemd-networkd[1605]: vxlan.calico: Gained IPv6LL Sep 13 00:44:09.852904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751720409.mount: Deactivated successfully. Sep 13 00:44:09.857157 containerd[1816]: time="2025-09-13T00:44:09.857134834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:09.857343 containerd[1816]: time="2025-09-13T00:44:09.857331001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 00:44:09.857631 containerd[1816]: time="2025-09-13T00:44:09.857620696Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:09.859130 containerd[1816]: time="2025-09-13T00:44:09.859090309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:09.859453 containerd[1816]: time="2025-09-13T00:44:09.859411785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.322578735s" Sep 13 00:44:09.859453 containerd[1816]: time="2025-09-13T00:44:09.859429337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 00:44:09.860418 containerd[1816]: time="2025-09-13T00:44:09.860402923Z" level=info msg="CreateContainer within sandbox \"3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 00:44:09.864107 containerd[1816]: time="2025-09-13T00:44:09.864090593Z" level=info msg="CreateContainer within sandbox \"3177105f23b508417f8e0fa4c34d8594d1635a22a6935094cd522e2bfba34b05\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a14cde130520b0827943dd68ec2bb8d5be441c639bf82a880ca8da972310490d\"" Sep 13 00:44:09.864333 containerd[1816]: time="2025-09-13T00:44:09.864321818Z" level=info msg="StartContainer for \"a14cde130520b0827943dd68ec2bb8d5be441c639bf82a880ca8da972310490d\"" Sep 13 00:44:09.893316 systemd[1]: Started cri-containerd-a14cde130520b0827943dd68ec2bb8d5be441c639bf82a880ca8da972310490d.scope - libcontainer container a14cde130520b0827943dd68ec2bb8d5be441c639bf82a880ca8da972310490d. 
Sep 13 00:44:09.918519 containerd[1816]: time="2025-09-13T00:44:09.918496727Z" level=info msg="StartContainer for \"a14cde130520b0827943dd68ec2bb8d5be441c639bf82a880ca8da972310490d\" returns successfully" Sep 13 00:44:10.580209 kubelet[3069]: I0913 00:44:10.580148 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-dd59b8ffb-w59zj" podStartSLOduration=1.7621636920000001 podStartE2EDuration="5.580136699s" podCreationTimestamp="2025-09-13 00:44:05 +0000 UTC" firstStartedPulling="2025-09-13 00:44:06.041849656 +0000 UTC m=+30.649351728" lastFinishedPulling="2025-09-13 00:44:09.859822665 +0000 UTC m=+34.467324735" observedRunningTime="2025-09-13 00:44:10.579766393 +0000 UTC m=+35.187268464" watchObservedRunningTime="2025-09-13 00:44:10.580136699 +0000 UTC m=+35.187638766" Sep 13 00:44:12.439773 containerd[1816]: time="2025-09-13T00:44:12.439646336Z" level=info msg="StopPodSandbox for \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\"" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.530 [INFO][5337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.530 [INFO][5337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" iface="eth0" netns="/var/run/netns/cni-1a755a2d-7642-428f-388d-da27bbac9129" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.530 [INFO][5337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" iface="eth0" netns="/var/run/netns/cni-1a755a2d-7642-428f-388d-da27bbac9129" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.531 [INFO][5337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" iface="eth0" netns="/var/run/netns/cni-1a755a2d-7642-428f-388d-da27bbac9129" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.531 [INFO][5337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.531 [INFO][5337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.567 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.567 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.567 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.579 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.579 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.581 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:12.586776 containerd[1816]: 2025-09-13 00:44:12.584 [INFO][5337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:12.587803 containerd[1816]: time="2025-09-13T00:44:12.586975075Z" level=info msg="TearDown network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\" successfully" Sep 13 00:44:12.587803 containerd[1816]: time="2025-09-13T00:44:12.587054567Z" level=info msg="StopPodSandbox for \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\" returns successfully" Sep 13 00:44:12.588118 containerd[1816]: time="2025-09-13T00:44:12.588029800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-crrvn,Uid:eb45447f-5bf3-45bc-8523-f45a74830305,Namespace:kube-system,Attempt:1,}" Sep 13 00:44:12.590822 systemd[1]: run-netns-cni\x2d1a755a2d\x2d7642\x2d428f\x2d388d\x2dda27bbac9129.mount: Deactivated successfully. 
Sep 13 00:44:12.649080 systemd-networkd[1605]: calia619552ec4b: Link UP Sep 13 00:44:12.649224 systemd-networkd[1605]: calia619552ec4b: Gained carrier Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.607 [INFO][5375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0 coredns-7c65d6cfc9- kube-system eb45447f-5bf3-45bc-8523-f45a74830305 903 0 2025-09-13 00:43:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 coredns-7c65d6cfc9-crrvn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia619552ec4b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.607 [INFO][5375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.619 [INFO][5397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" HandleID="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.619 [INFO][5397] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" HandleID="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a74b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"coredns-7c65d6cfc9-crrvn", "timestamp":"2025-09-13 00:44:12.619108143 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.619 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.619 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.619 [INFO][5397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.623 [INFO][5397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.625 [INFO][5397] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.627 [INFO][5397] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.627 [INFO][5397] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.628 [INFO][5397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.629 [INFO][5397] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.629 [INFO][5397] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.644 [INFO][5397] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.647 [INFO][5397] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.194/26] block=192.168.77.192/26 handle="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.647 [INFO][5397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.194/26] handle="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.647 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:12.656252 containerd[1816]: 2025-09-13 00:44:12.647 [INFO][5397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.194/26] IPv6=[] ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" HandleID="k8s-pod-network.69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.656779 containerd[1816]: 2025-09-13 00:44:12.648 [INFO][5375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb45447f-5bf3-45bc-8523-f45a74830305", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"coredns-7c65d6cfc9-crrvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia619552ec4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:12.656779 containerd[1816]: 2025-09-13 00:44:12.648 [INFO][5375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.194/32] ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.656779 containerd[1816]: 2025-09-13 00:44:12.648 [INFO][5375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia619552ec4b ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.656779 containerd[1816]: 2025-09-13 00:44:12.649 [INFO][5375] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.656779 containerd[1816]: 2025-09-13 00:44:12.649 [INFO][5375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb45447f-5bf3-45bc-8523-f45a74830305", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba", Pod:"coredns-7c65d6cfc9-crrvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia619552ec4b", 
MAC:"42:bc:fc:92:6d:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:12.656779 containerd[1816]: 2025-09-13 00:44:12.655 [INFO][5375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba" Namespace="kube-system" Pod="coredns-7c65d6cfc9-crrvn" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:12.665054 containerd[1816]: time="2025-09-13T00:44:12.664991369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:12.665054 containerd[1816]: time="2025-09-13T00:44:12.665026707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:12.665054 containerd[1816]: time="2025-09-13T00:44:12.665033968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:12.665170 containerd[1816]: time="2025-09-13T00:44:12.665073794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:12.690190 systemd[1]: Started cri-containerd-69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba.scope - libcontainer container 69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba. 
Sep 13 00:44:12.720876 containerd[1816]: time="2025-09-13T00:44:12.720848855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-crrvn,Uid:eb45447f-5bf3-45bc-8523-f45a74830305,Namespace:kube-system,Attempt:1,} returns sandbox id \"69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba\"" Sep 13 00:44:12.722431 containerd[1816]: time="2025-09-13T00:44:12.722386216Z" level=info msg="CreateContainer within sandbox \"69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:44:12.726864 containerd[1816]: time="2025-09-13T00:44:12.726823085Z" level=info msg="CreateContainer within sandbox \"69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f0bcd30a3eda928ab76ed6e4011695ec2334303a4f5172023659284c8f3fd00\"" Sep 13 00:44:12.727058 containerd[1816]: time="2025-09-13T00:44:12.727008009Z" level=info msg="StartContainer for \"6f0bcd30a3eda928ab76ed6e4011695ec2334303a4f5172023659284c8f3fd00\"" Sep 13 00:44:12.751141 systemd[1]: Started cri-containerd-6f0bcd30a3eda928ab76ed6e4011695ec2334303a4f5172023659284c8f3fd00.scope - libcontainer container 6f0bcd30a3eda928ab76ed6e4011695ec2334303a4f5172023659284c8f3fd00. 
Sep 13 00:44:12.764019 containerd[1816]: time="2025-09-13T00:44:12.763991619Z" level=info msg="StartContainer for \"6f0bcd30a3eda928ab76ed6e4011695ec2334303a4f5172023659284c8f3fd00\" returns successfully" Sep 13 00:44:13.440475 containerd[1816]: time="2025-09-13T00:44:13.440354097Z" level=info msg="StopPodSandbox for \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\"" Sep 13 00:44:13.441475 containerd[1816]: time="2025-09-13T00:44:13.440368007Z" level=info msg="StopPodSandbox for \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\"" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.465 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.465 [INFO][5540] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" iface="eth0" netns="/var/run/netns/cni-b49cdcd9-b5f6-9d90-43ea-b9fcc7f37a14" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.465 [INFO][5540] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" iface="eth0" netns="/var/run/netns/cni-b49cdcd9-b5f6-9d90-43ea-b9fcc7f37a14" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.465 [INFO][5540] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" iface="eth0" netns="/var/run/netns/cni-b49cdcd9-b5f6-9d90-43ea-b9fcc7f37a14" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.465 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.466 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.476 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.476 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.476 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.480 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.480 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.480 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:13.482296 containerd[1816]: 2025-09-13 00:44:13.481 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:13.482569 containerd[1816]: time="2025-09-13T00:44:13.482333877Z" level=info msg="TearDown network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\" successfully" Sep 13 00:44:13.482569 containerd[1816]: time="2025-09-13T00:44:13.482349813Z" level=info msg="StopPodSandbox for \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\" returns successfully" Sep 13 00:44:13.482768 containerd[1816]: time="2025-09-13T00:44:13.482753136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bjldk,Uid:fc068cc0-2f07-4c24-a603-b6967550f6d8,Namespace:calico-system,Attempt:1,}" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.465 [INFO][5539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.465 [INFO][5539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" iface="eth0" netns="/var/run/netns/cni-0592aeae-4658-c81c-4e59-ce33407a7dac" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.466 [INFO][5539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" iface="eth0" netns="/var/run/netns/cni-0592aeae-4658-c81c-4e59-ce33407a7dac" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.466 [INFO][5539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" iface="eth0" netns="/var/run/netns/cni-0592aeae-4658-c81c-4e59-ce33407a7dac" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.466 [INFO][5539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.466 [INFO][5539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.476 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.476 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.480 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.484 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.484 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.485 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:13.487518 containerd[1816]: 2025-09-13 00:44:13.486 [INFO][5539] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:13.487843 containerd[1816]: time="2025-09-13T00:44:13.487574087Z" level=info msg="TearDown network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\" successfully" Sep 13 00:44:13.487843 containerd[1816]: time="2025-09-13T00:44:13.487589079Z" level=info msg="StopPodSandbox for \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\" returns successfully" Sep 13 00:44:13.487984 containerd[1816]: time="2025-09-13T00:44:13.487973364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sb6sx,Uid:7760760a-7878-4b4e-8517-c3f27b4755d1,Namespace:kube-system,Attempt:1,}" Sep 13 00:44:13.532553 systemd-networkd[1605]: cali76b825a70a1: Link UP Sep 13 00:44:13.532682 systemd-networkd[1605]: cali76b825a70a1: Gained carrier Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.504 [INFO][5602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0 goldmane-7988f88666- calico-system fc068cc0-2f07-4c24-a603-b6967550f6d8 915 0 2025-09-13 00:43:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 goldmane-7988f88666-bjldk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali76b825a70a1 [] [] }} ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.504 [INFO][5602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.515 [INFO][5649] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" HandleID="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.516 [INFO][5649] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" HandleID="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000619f50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"goldmane-7988f88666-bjldk", "timestamp":"2025-09-13 00:44:13.515946198 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.516 [INFO][5649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.516 [INFO][5649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.516 [INFO][5649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.519 [INFO][5649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.522 [INFO][5649] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.524 [INFO][5649] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.524 [INFO][5649] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.525 [INFO][5649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.525 [INFO][5649] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.526 [INFO][5649] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898 Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.527 [INFO][5649] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.530 [INFO][5649] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.195/26] block=192.168.77.192/26 handle="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.530 [INFO][5649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.195/26] handle="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.530 [INFO][5649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:13.538580 containerd[1816]: 2025-09-13 00:44:13.530 [INFO][5649] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.195/26] IPv6=[] ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" HandleID="k8s-pod-network.91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.538982 containerd[1816]: 2025-09-13 00:44:13.531 [INFO][5602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fc068cc0-2f07-4c24-a603-b6967550f6d8", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"goldmane-7988f88666-bjldk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76b825a70a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:13.538982 containerd[1816]: 2025-09-13 00:44:13.531 [INFO][5602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.195/32] ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.538982 containerd[1816]: 2025-09-13 00:44:13.531 [INFO][5602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76b825a70a1 ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.538982 containerd[1816]: 2025-09-13 00:44:13.532 [INFO][5602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.538982 containerd[1816]: 2025-09-13 00:44:13.532 [INFO][5602] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fc068cc0-2f07-4c24-a603-b6967550f6d8", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898", Pod:"goldmane-7988f88666-bjldk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76b825a70a1", MAC:"42:7b:b4:9f:8c:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:13.538982 containerd[1816]: 2025-09-13 00:44:13.537 [INFO][5602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898" Namespace="calico-system" Pod="goldmane-7988f88666-bjldk" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:13.546497 containerd[1816]: time="2025-09-13T00:44:13.546421904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:13.546665 containerd[1816]: time="2025-09-13T00:44:13.546628001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:13.546665 containerd[1816]: time="2025-09-13T00:44:13.546637732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:13.546706 containerd[1816]: time="2025-09-13T00:44:13.546677087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:13.561284 systemd[1]: Started cri-containerd-91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898.scope - libcontainer container 91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898. 
Sep 13 00:44:13.580138 kubelet[3069]: I0913 00:44:13.580084 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-crrvn" podStartSLOduration=31.58006907 podStartE2EDuration="31.58006907s" podCreationTimestamp="2025-09-13 00:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:44:13.579821167 +0000 UTC m=+38.187323238" watchObservedRunningTime="2025-09-13 00:44:13.58006907 +0000 UTC m=+38.187571138" Sep 13 00:44:13.585252 containerd[1816]: time="2025-09-13T00:44:13.585220689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bjldk,Uid:fc068cc0-2f07-4c24-a603-b6967550f6d8,Namespace:calico-system,Attempt:1,} returns sandbox id \"91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898\"" Sep 13 00:44:13.586250 containerd[1816]: time="2025-09-13T00:44:13.586230834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 00:44:13.591395 systemd[1]: run-netns-cni\x2db49cdcd9\x2db5f6\x2d9d90\x2d43ea\x2db9fcc7f37a14.mount: Deactivated successfully. Sep 13 00:44:13.591456 systemd[1]: run-netns-cni\x2d0592aeae\x2d4658\x2dc81c\x2d4e59\x2dce33407a7dac.mount: Deactivated successfully. 
Sep 13 00:44:13.685875 systemd-networkd[1605]: calie6d33eacf72: Link UP Sep 13 00:44:13.686100 systemd-networkd[1605]: calie6d33eacf72: Gained carrier Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.506 [INFO][5614] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0 coredns-7c65d6cfc9- kube-system 7760760a-7878-4b4e-8517-c3f27b4755d1 916 0 2025-09-13 00:43:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 coredns-7c65d6cfc9-sb6sx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie6d33eacf72 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.506 [INFO][5614] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.517 [INFO][5655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" HandleID="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.517 [INFO][5655] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" HandleID="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a86f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"coredns-7c65d6cfc9-sb6sx", "timestamp":"2025-09-13 00:44:13.51780987 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.517 [INFO][5655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.530 [INFO][5655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.530 [INFO][5655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.621 [INFO][5655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.630 [INFO][5655] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.647 [INFO][5655] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.651 [INFO][5655] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.656 [INFO][5655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.656 [INFO][5655] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.659 [INFO][5655] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1 Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.667 [INFO][5655] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.679 [INFO][5655] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.196/26] block=192.168.77.192/26 handle="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.679 [INFO][5655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.196/26] handle="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.679 [INFO][5655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:13.693644 containerd[1816]: 2025-09-13 00:44:13.679 [INFO][5655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.196/26] IPv6=[] ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" HandleID="k8s-pod-network.6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.694211 containerd[1816]: 2025-09-13 00:44:13.683 [INFO][5614] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7760760a-7878-4b4e-8517-c3f27b4755d1", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"coredns-7c65d6cfc9-sb6sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6d33eacf72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:13.694211 containerd[1816]: 2025-09-13 00:44:13.683 [INFO][5614] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.196/32] ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.694211 containerd[1816]: 2025-09-13 00:44:13.683 [INFO][5614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6d33eacf72 ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.694211 containerd[1816]: 2025-09-13 00:44:13.686 [INFO][5614] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.694211 containerd[1816]: 2025-09-13 00:44:13.686 [INFO][5614] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7760760a-7878-4b4e-8517-c3f27b4755d1", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1", Pod:"coredns-7c65d6cfc9-sb6sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6d33eacf72", 
MAC:"1a:d8:e3:e1:5e:b7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:13.694211 containerd[1816]: 2025-09-13 00:44:13.692 [INFO][5614] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sb6sx" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:13.702818 containerd[1816]: time="2025-09-13T00:44:13.702749576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:13.702818 containerd[1816]: time="2025-09-13T00:44:13.702782341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:13.702818 containerd[1816]: time="2025-09-13T00:44:13.702791141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:13.702915 containerd[1816]: time="2025-09-13T00:44:13.702830396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:13.726158 systemd[1]: Started cri-containerd-6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1.scope - libcontainer container 6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1. 
Sep 13 00:44:13.752412 containerd[1816]: time="2025-09-13T00:44:13.752388523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sb6sx,Uid:7760760a-7878-4b4e-8517-c3f27b4755d1,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1\"" Sep 13 00:44:13.753651 containerd[1816]: time="2025-09-13T00:44:13.753632648Z" level=info msg="CreateContainer within sandbox \"6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:44:13.758034 containerd[1816]: time="2025-09-13T00:44:13.757993168Z" level=info msg="CreateContainer within sandbox \"6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18fa716a516b0ef8e9aef27ecef25e68f5abfe00b5682732d307638981c83705\"" Sep 13 00:44:13.758265 containerd[1816]: time="2025-09-13T00:44:13.758251852Z" level=info msg="StartContainer for \"18fa716a516b0ef8e9aef27ecef25e68f5abfe00b5682732d307638981c83705\"" Sep 13 00:44:13.782192 systemd[1]: Started cri-containerd-18fa716a516b0ef8e9aef27ecef25e68f5abfe00b5682732d307638981c83705.scope - libcontainer container 18fa716a516b0ef8e9aef27ecef25e68f5abfe00b5682732d307638981c83705. 
Sep 13 00:44:13.796530 containerd[1816]: time="2025-09-13T00:44:13.796495498Z" level=info msg="StartContainer for \"18fa716a516b0ef8e9aef27ecef25e68f5abfe00b5682732d307638981c83705\" returns successfully" Sep 13 00:44:14.136231 systemd-networkd[1605]: calia619552ec4b: Gained IPv6LL Sep 13 00:44:14.581698 kubelet[3069]: I0913 00:44:14.581612 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sb6sx" podStartSLOduration=32.581598123 podStartE2EDuration="32.581598123s" podCreationTimestamp="2025-09-13 00:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:44:14.581346778 +0000 UTC m=+39.188848852" watchObservedRunningTime="2025-09-13 00:44:14.581598123 +0000 UTC m=+39.189100197" Sep 13 00:44:15.352093 systemd-networkd[1605]: calie6d33eacf72: Gained IPv6LL Sep 13 00:44:15.416297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3780078617.mount: Deactivated successfully. Sep 13 00:44:15.438924 containerd[1816]: time="2025-09-13T00:44:15.438896442Z" level=info msg="StopPodSandbox for \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\"" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.462 [INFO][5860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.462 [INFO][5860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" iface="eth0" netns="/var/run/netns/cni-7d4648cd-6529-c10b-c11b-b9e5c988caff" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.463 [INFO][5860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" iface="eth0" netns="/var/run/netns/cni-7d4648cd-6529-c10b-c11b-b9e5c988caff" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.463 [INFO][5860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" iface="eth0" netns="/var/run/netns/cni-7d4648cd-6529-c10b-c11b-b9e5c988caff" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.463 [INFO][5860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.463 [INFO][5860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.473 [INFO][5878] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.473 [INFO][5878] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.473 [INFO][5878] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.477 [WARNING][5878] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.477 [INFO][5878] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.478 [INFO][5878] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:15.480286 containerd[1816]: 2025-09-13 00:44:15.479 [INFO][5860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:15.480577 containerd[1816]: time="2025-09-13T00:44:15.480365930Z" level=info msg="TearDown network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\" successfully" Sep 13 00:44:15.480577 containerd[1816]: time="2025-09-13T00:44:15.480389397Z" level=info msg="StopPodSandbox for \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\" returns successfully" Sep 13 00:44:15.480808 containerd[1816]: time="2025-09-13T00:44:15.480794821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v25t,Uid:c3e12566-608c-47e4-9de0-c2f38136e9e0,Namespace:calico-system,Attempt:1,}" Sep 13 00:44:15.482051 systemd[1]: run-netns-cni\x2d7d4648cd\x2d6529\x2dc10b\x2dc11b\x2db9e5c988caff.mount: Deactivated successfully. 
Sep 13 00:44:15.538204 systemd-networkd[1605]: calic9b380d2f20: Link UP Sep 13 00:44:15.538329 systemd-networkd[1605]: calic9b380d2f20: Gained carrier Sep 13 00:44:15.544068 systemd-networkd[1605]: cali76b825a70a1: Gained IPv6LL Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.503 [INFO][5894] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0 csi-node-driver- calico-system c3e12566-608c-47e4-9de0-c2f38136e9e0 951 0 2025-09-13 00:43:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 csi-node-driver-9v25t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic9b380d2f20 [] [] }} ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.503 [INFO][5894] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.516 [INFO][5918] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" HandleID="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 
00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.516 [INFO][5918] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" HandleID="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052e790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"csi-node-driver-9v25t", "timestamp":"2025-09-13 00:44:15.516543895 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.516 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.516 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.516 [INFO][5918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.520 [INFO][5918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.523 [INFO][5918] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.526 [INFO][5918] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.528 [INFO][5918] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.529 [INFO][5918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.529 [INFO][5918] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.530 [INFO][5918] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982 Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.533 [INFO][5918] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.536 [INFO][5918] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.197/26] block=192.168.77.192/26 handle="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.536 [INFO][5918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.197/26] handle="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.536 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:15.546007 containerd[1816]: 2025-09-13 00:44:15.536 [INFO][5918] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.197/26] IPv6=[] ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" HandleID="k8s-pod-network.341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.546568 containerd[1816]: 2025-09-13 00:44:15.537 [INFO][5894] cni-plugin/k8s.go 418: Populated endpoint ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3e12566-608c-47e4-9de0-c2f38136e9e0", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"csi-node-driver-9v25t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9b380d2f20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:15.546568 containerd[1816]: 2025-09-13 00:44:15.537 [INFO][5894] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.197/32] ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.546568 containerd[1816]: 2025-09-13 00:44:15.537 [INFO][5894] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9b380d2f20 ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.546568 containerd[1816]: 2025-09-13 00:44:15.538 [INFO][5894] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.546568 
containerd[1816]: 2025-09-13 00:44:15.538 [INFO][5894] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3e12566-608c-47e4-9de0-c2f38136e9e0", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982", Pod:"csi-node-driver-9v25t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9b380d2f20", MAC:"26:b7:95:20:01:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:15.546568 containerd[1816]: 
2025-09-13 00:44:15.544 [INFO][5894] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982" Namespace="calico-system" Pod="csi-node-driver-9v25t" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:15.554218 containerd[1816]: time="2025-09-13T00:44:15.554178048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:15.554302 containerd[1816]: time="2025-09-13T00:44:15.554214293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:15.554302 containerd[1816]: time="2025-09-13T00:44:15.554245330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:15.554356 containerd[1816]: time="2025-09-13T00:44:15.554307868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:15.574173 systemd[1]: Started cri-containerd-341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982.scope - libcontainer container 341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982. 
Sep 13 00:44:15.584619 containerd[1816]: time="2025-09-13T00:44:15.584591147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9v25t,Uid:c3e12566-608c-47e4-9de0-c2f38136e9e0,Namespace:calico-system,Attempt:1,} returns sandbox id \"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982\"" Sep 13 00:44:15.634690 containerd[1816]: time="2025-09-13T00:44:15.634668106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:15.634882 containerd[1816]: time="2025-09-13T00:44:15.634862023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 00:44:15.635196 containerd[1816]: time="2025-09-13T00:44:15.635156454Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:15.636415 containerd[1816]: time="2025-09-13T00:44:15.636374415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:44:15.636862 containerd[1816]: time="2025-09-13T00:44:15.636820931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 2.050556893s" Sep 13 00:44:15.636862 containerd[1816]: time="2025-09-13T00:44:15.636838145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 
00:44:15.637395 containerd[1816]: time="2025-09-13T00:44:15.637380602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:44:15.637902 containerd[1816]: time="2025-09-13T00:44:15.637889633Z" level=info msg="CreateContainer within sandbox \"91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 00:44:15.644860 containerd[1816]: time="2025-09-13T00:44:15.644844638Z" level=info msg="CreateContainer within sandbox \"91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"502111430d285589f62cd007826726429f36b9ec37b4f2a5afb760843580165b\"" Sep 13 00:44:15.645097 containerd[1816]: time="2025-09-13T00:44:15.645085041Z" level=info msg="StartContainer for \"502111430d285589f62cd007826726429f36b9ec37b4f2a5afb760843580165b\"" Sep 13 00:44:15.666326 systemd[1]: Started cri-containerd-502111430d285589f62cd007826726429f36b9ec37b4f2a5afb760843580165b.scope - libcontainer container 502111430d285589f62cd007826726429f36b9ec37b4f2a5afb760843580165b. 
Sep 13 00:44:15.689076 containerd[1816]: time="2025-09-13T00:44:15.689055969Z" level=info msg="StartContainer for \"502111430d285589f62cd007826726429f36b9ec37b4f2a5afb760843580165b\" returns successfully" Sep 13 00:44:16.439880 containerd[1816]: time="2025-09-13T00:44:16.439754921Z" level=info msg="StopPodSandbox for \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\"" Sep 13 00:44:16.439880 containerd[1816]: time="2025-09-13T00:44:16.439840774Z" level=info msg="StopPodSandbox for \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\"" Sep 13 00:44:16.441065 containerd[1816]: time="2025-09-13T00:44:16.440142262Z" level=info msg="StopPodSandbox for \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\"" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.498 [INFO][6074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.498 [INFO][6074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" iface="eth0" netns="/var/run/netns/cni-85fc48fc-6128-c3b1-3cf2-d3a918554370" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.498 [INFO][6074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" iface="eth0" netns="/var/run/netns/cni-85fc48fc-6128-c3b1-3cf2-d3a918554370" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.499 [INFO][6074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" iface="eth0" netns="/var/run/netns/cni-85fc48fc-6128-c3b1-3cf2-d3a918554370" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.499 [INFO][6074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.499 [INFO][6074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.513 [INFO][6118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.513 [INFO][6118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.513 [INFO][6118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.517 [WARNING][6118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.517 [INFO][6118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.518 [INFO][6118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:16.520055 containerd[1816]: 2025-09-13 00:44:16.519 [INFO][6074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:16.520377 containerd[1816]: time="2025-09-13T00:44:16.520139062Z" level=info msg="TearDown network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\" successfully" Sep 13 00:44:16.520377 containerd[1816]: time="2025-09-13T00:44:16.520157142Z" level=info msg="StopPodSandbox for \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\" returns successfully" Sep 13 00:44:16.520590 containerd[1816]: time="2025-09-13T00:44:16.520572642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-7d78s,Uid:b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6073] cni-plugin/dataplane_linux.go 559: Deleting workload's device 
in netns. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" iface="eth0" netns="/var/run/netns/cni-fd023193-f614-d0dd-c20d-356221fa340c" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" iface="eth0" netns="/var/run/netns/cni-fd023193-f614-d0dd-c20d-356221fa340c" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" iface="eth0" netns="/var/run/netns/cni-fd023193-f614-d0dd-c20d-356221fa340c" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.513 [INFO][6122] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.513 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.518 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.522 [WARNING][6122] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.522 [INFO][6122] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.522 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:16.524198 containerd[1816]: 2025-09-13 00:44:16.523 [INFO][6073] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:16.524507 containerd[1816]: time="2025-09-13T00:44:16.524242551Z" level=info msg="TearDown network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\" successfully" Sep 13 00:44:16.524507 containerd[1816]: time="2025-09-13T00:44:16.524259391Z" level=info msg="StopPodSandbox for \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\" returns successfully" Sep 13 00:44:16.524597 containerd[1816]: time="2025-09-13T00:44:16.524559322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7d549755-rwftt,Uid:72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11,Namespace:calico-system,Attempt:1,}" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.499 [INFO][6072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.499 [INFO][6072] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" iface="eth0" netns="/var/run/netns/cni-be84bcaf-b927-11db-99e5-c4544b443612" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.499 [INFO][6072] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" iface="eth0" netns="/var/run/netns/cni-be84bcaf-b927-11db-99e5-c4544b443612" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6072] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" iface="eth0" netns="/var/run/netns/cni-be84bcaf-b927-11db-99e5-c4544b443612" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.500 [INFO][6072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.513 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.513 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.523 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.525 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.525 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.526 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:16.528241 containerd[1816]: 2025-09-13 00:44:16.527 [INFO][6072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Sep 13 00:44:16.528484 containerd[1816]: time="2025-09-13T00:44:16.528256015Z" level=info msg="TearDown network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\" successfully" Sep 13 00:44:16.528484 containerd[1816]: time="2025-09-13T00:44:16.528266190Z" level=info msg="StopPodSandbox for \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\" returns successfully" Sep 13 00:44:16.528527 containerd[1816]: time="2025-09-13T00:44:16.528487275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-r7n8m,Uid:1f143f97-3706-4d57-9a71-b20a0d04227b,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:44:16.584390 kubelet[3069]: I0913 00:44:16.584351 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-bjldk" podStartSLOduration=22.533109498 podStartE2EDuration="24.584339753s" podCreationTimestamp="2025-09-13 00:43:52 +0000 UTC" firstStartedPulling="2025-09-13 
00:44:13.586076938 +0000 UTC m=+38.193579011" lastFinishedPulling="2025-09-13 00:44:15.637307193 +0000 UTC m=+40.244809266" observedRunningTime="2025-09-13 00:44:16.584043206 +0000 UTC m=+41.191545278" watchObservedRunningTime="2025-09-13 00:44:16.584339753 +0000 UTC m=+41.191841820" Sep 13 00:44:16.591783 systemd-networkd[1605]: calid8b8b59390a: Link UP Sep 13 00:44:16.591931 systemd-networkd[1605]: calid8b8b59390a: Gained carrier Sep 13 00:44:16.592103 systemd[1]: run-netns-cni\x2d85fc48fc\x2d6128\x2dc3b1\x2d3cf2\x2dd3a918554370.mount: Deactivated successfully. Sep 13 00:44:16.592156 systemd[1]: run-netns-cni\x2dbe84bcaf\x2db927\x2d11db\x2d99e5\x2dc4544b443612.mount: Deactivated successfully. Sep 13 00:44:16.592192 systemd[1]: run-netns-cni\x2dfd023193\x2df614\x2dd0dd\x2dc20d\x2d356221fa340c.mount: Deactivated successfully. Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.561 [INFO][6194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0 calico-apiserver-6cdbb444- calico-apiserver 1f143f97-3706-4d57-9a71-b20a0d04227b 964 0 2025-09-13 00:43:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cdbb444 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 calico-apiserver-6cdbb444-r7n8m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid8b8b59390a [] [] }} ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.561 [INFO][6194] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" HandleID="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" HandleID="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000592af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"calico-apiserver-6cdbb444-r7n8m", "timestamp":"2025-09-13 00:44:16.573514636 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.577 [INFO][6244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.579 [INFO][6244] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.581 [INFO][6244] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.582 [INFO][6244] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.583 [INFO][6244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.583 [INFO][6244] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.584 [INFO][6244] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01 Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.586 [INFO][6244] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.589 [INFO][6244] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.198/26] block=192.168.77.192/26 handle="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.589 [INFO][6244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.198/26] handle="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.589 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:16.597819 containerd[1816]: 2025-09-13 00:44:16.589 [INFO][6244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.198/26] IPv6=[] ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" HandleID="k8s-pod-network.706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.598225 containerd[1816]: 2025-09-13 00:44:16.590 [INFO][6194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f143f97-3706-4d57-9a71-b20a0d04227b", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"calico-apiserver-6cdbb444-r7n8m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8b8b59390a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:16.598225 containerd[1816]: 2025-09-13 00:44:16.590 [INFO][6194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.198/32] ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.598225 containerd[1816]: 2025-09-13 00:44:16.590 [INFO][6194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8b8b59390a ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.598225 containerd[1816]: 2025-09-13 00:44:16.592 [INFO][6194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" 
WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.598225 containerd[1816]: 2025-09-13 00:44:16.592 [INFO][6194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f143f97-3706-4d57-9a71-b20a0d04227b", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01", Pod:"calico-apiserver-6cdbb444-r7n8m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8b8b59390a", MAC:"b6:ea:79:4f:11:d6", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:16.598225 containerd[1816]: 2025-09-13 00:44:16.597 [INFO][6194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-r7n8m" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0" Sep 13 00:44:16.606165 containerd[1816]: time="2025-09-13T00:44:16.606122429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:16.606165 containerd[1816]: time="2025-09-13T00:44:16.606153055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:16.606165 containerd[1816]: time="2025-09-13T00:44:16.606160266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:16.606284 containerd[1816]: time="2025-09-13T00:44:16.606238754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:16.625241 systemd[1]: Started cri-containerd-706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01.scope - libcontainer container 706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01. 
Sep 13 00:44:16.649401 containerd[1816]: time="2025-09-13T00:44:16.649379407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-r7n8m,Uid:1f143f97-3706-4d57-9a71-b20a0d04227b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01\"" Sep 13 00:44:16.694116 systemd-networkd[1605]: califae0ad6261b: Link UP Sep 13 00:44:16.694306 systemd-networkd[1605]: califae0ad6261b: Gained carrier Sep 13 00:44:16.697075 systemd-networkd[1605]: calic9b380d2f20: Gained IPv6LL Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.561 [INFO][6172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0 calico-apiserver-6cdbb444- calico-apiserver b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0 963 0 2025-09-13 00:43:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cdbb444 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 calico-apiserver-6cdbb444-7d78s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califae0ad6261b [] [] }} ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.561 [INFO][6172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 
00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6243] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" HandleID="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6243] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" HandleID="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"calico-apiserver-6cdbb444-7d78s", "timestamp":"2025-09-13 00:44:16.57354337 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.589 [INFO][6243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.589 [INFO][6243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.678 [INFO][6243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.680 [INFO][6243] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.682 [INFO][6243] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.683 [INFO][6243] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.685 [INFO][6243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.685 [INFO][6243] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.686 [INFO][6243] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.688 [INFO][6243] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.691 [INFO][6243] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.199/26] block=192.168.77.192/26 handle="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.691 [INFO][6243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.199/26] handle="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.691 [INFO][6243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:16.702126 containerd[1816]: 2025-09-13 00:44:16.691 [INFO][6243] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.199/26] IPv6=[] ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" HandleID="k8s-pod-network.4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.702729 containerd[1816]: 2025-09-13 00:44:16.692 [INFO][6172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"calico-apiserver-6cdbb444-7d78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae0ad6261b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:16.702729 containerd[1816]: 2025-09-13 00:44:16.693 [INFO][6172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.199/32] ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.702729 containerd[1816]: 2025-09-13 00:44:16.693 [INFO][6172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califae0ad6261b ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.702729 containerd[1816]: 2025-09-13 00:44:16.694 [INFO][6172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" 
WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.702729 containerd[1816]: 2025-09-13 00:44:16.694 [INFO][6172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e", Pod:"calico-apiserver-6cdbb444-7d78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae0ad6261b", MAC:"a6:c6:9a:8b:18:34", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:16.702729 containerd[1816]: 2025-09-13 00:44:16.700 [INFO][6172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e" Namespace="calico-apiserver" Pod="calico-apiserver-6cdbb444-7d78s" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:16.710992 containerd[1816]: time="2025-09-13T00:44:16.710929353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:16.711180 containerd[1816]: time="2025-09-13T00:44:16.711157806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:16.711180 containerd[1816]: time="2025-09-13T00:44:16.711170000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:16.711303 containerd[1816]: time="2025-09-13T00:44:16.711250658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:16.732097 systemd[1]: Started cri-containerd-4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e.scope - libcontainer container 4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e. 
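The IPAM entries above show Calico handing out consecutive addresses (.198 for `calico-apiserver-6cdbb444-r7n8m`, then .199 for `calico-apiserver-6cdbb444-7d78s`) from the node's affine block `192.168.77.192/26`, serialized by the host-wide IPAM lock visible in the log. As a rough illustration only — this is not Calico's actual allocator, which persists per-block allocation state in the datastore — the next-free-address selection within a block can be sketched as:

```python
import ipaddress


def next_free_ip(block: str, allocated: set) -> str:
    """Return the first unallocated host address in the CIDR block.

    Simplified stand-in for Calico's block allocator: the real
    implementation tracks allocations in the datastore and serializes
    access with the host-wide IPAM lock seen in the log above.
    """
    net = ipaddress.ip_network(block)
    for ip in net.hosts():  # skips the network and broadcast addresses
        if str(ip) not in allocated:
            allocated.add(str(ip))
            return str(ip)
    raise RuntimeError(f"block {block} exhausted")


# .193-.197 already claimed by earlier pods on this node; the next two
# requests then receive .198 and .199, matching the log sequence above.
used = {f"192.168.77.{n}" for n in range(193, 198)}
first = next_free_ip("192.168.77.192/26", used)
second = next_free_ip("192.168.77.192/26", used)
```

When a block is exhausted, the real allocator falls back to claiming a new block for the host, which is why the log first confirms the block affinity before assigning.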
Sep 13 00:44:16.755240 containerd[1816]: time="2025-09-13T00:44:16.755218845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cdbb444-7d78s,Uid:b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e\"" Sep 13 00:44:16.798119 systemd-networkd[1605]: cali565053d1837: Link UP Sep 13 00:44:16.798336 systemd-networkd[1605]: cali565053d1837: Gained carrier Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.561 [INFO][6179] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0 calico-kube-controllers-6d7d549755- calico-system 72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11 965 0 2025-09-13 00:43:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d7d549755 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-2af8d06a22 calico-kube-controllers-6d7d549755-rwftt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali565053d1837 [] [] }} ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.561 [INFO][6179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.808037 
containerd[1816]: 2025-09-13 00:44:16.573 [INFO][6241] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" HandleID="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.574 [INFO][6241] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" HandleID="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043a430), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-2af8d06a22", "pod":"calico-kube-controllers-6d7d549755-rwftt", "timestamp":"2025-09-13 00:44:16.573962553 +0000 UTC"}, Hostname:"ci-4081.3.5-n-2af8d06a22", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.574 [INFO][6241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.691 [INFO][6241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.691 [INFO][6241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-2af8d06a22' Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.778 [INFO][6241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.781 [INFO][6241] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.784 [INFO][6241] ipam/ipam.go 511: Trying affinity for 192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.785 [INFO][6241] ipam/ipam.go 158: Attempting to load block cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.787 [INFO][6241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.77.192/26 host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.787 [INFO][6241] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.77.192/26 handle="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.788 [INFO][6241] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.790 [INFO][6241] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.77.192/26 handle="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.794 [INFO][6241] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.77.200/26] block=192.168.77.192/26 handle="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.794 [INFO][6241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.77.200/26] handle="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" host="ci-4081.3.5-n-2af8d06a22" Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.795 [INFO][6241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:16.808037 containerd[1816]: 2025-09-13 00:44:16.795 [INFO][6241] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.77.200/26] IPv6=[] ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" HandleID="k8s-pod-network.9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.808819 containerd[1816]: 2025-09-13 00:44:16.796 [INFO][6179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0", GenerateName:"calico-kube-controllers-6d7d549755-", Namespace:"calico-system", SelfLink:"", UID:"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7d549755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"", Pod:"calico-kube-controllers-6d7d549755-rwftt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565053d1837", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:16.808819 containerd[1816]: 2025-09-13 00:44:16.796 [INFO][6179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.77.200/32] ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.808819 containerd[1816]: 2025-09-13 00:44:16.796 [INFO][6179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali565053d1837 ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.808819 containerd[1816]: 2025-09-13 00:44:16.798 [INFO][6179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.808819 containerd[1816]: 2025-09-13 00:44:16.798 [INFO][6179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0", GenerateName:"calico-kube-controllers-6d7d549755-", Namespace:"calico-system", SelfLink:"", UID:"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7d549755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f", Pod:"calico-kube-controllers-6d7d549755-rwftt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.200/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565053d1837", MAC:"b6:80:ff:46:19:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:16.808819 containerd[1816]: 2025-09-13 00:44:16.806 [INFO][6179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f" Namespace="calico-system" Pod="calico-kube-controllers-6d7d549755-rwftt" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:16.817257 containerd[1816]: time="2025-09-13T00:44:16.817181599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:44:16.817405 containerd[1816]: time="2025-09-13T00:44:16.817386540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:44:16.817405 containerd[1816]: time="2025-09-13T00:44:16.817396182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:16.817461 containerd[1816]: time="2025-09-13T00:44:16.817438919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:44:16.841204 systemd[1]: Started cri-containerd-9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f.scope - libcontainer container 9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f. 
Sep 13 00:44:16.870911 containerd[1816]: time="2025-09-13T00:44:16.870861766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d7d549755-rwftt,Uid:72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11,Namespace:calico-system,Attempt:1,} returns sandbox id \"9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f\""
Sep 13 00:44:17.427812 containerd[1816]: time="2025-09-13T00:44:17.427783944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:17.427990 containerd[1816]: time="2025-09-13T00:44:17.427962817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 13 00:44:17.428366 containerd[1816]: time="2025-09-13T00:44:17.428353996Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:17.429759 containerd[1816]: time="2025-09-13T00:44:17.429747009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:17.430095 containerd[1816]: time="2025-09-13T00:44:17.430069889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.792671505s"
Sep 13 00:44:17.430118 containerd[1816]: time="2025-09-13T00:44:17.430099470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 13 00:44:17.430592 containerd[1816]: time="2025-09-13T00:44:17.430581010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 13 00:44:17.431252 containerd[1816]: time="2025-09-13T00:44:17.431240982Z" level=info msg="CreateContainer within sandbox \"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 13 00:44:17.437100 containerd[1816]: time="2025-09-13T00:44:17.437070349Z" level=info msg="CreateContainer within sandbox \"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"510b5915f844dbc76b22ce694bf1c2126c99fa195699ad67f8ae4255c5a785bd\""
Sep 13 00:44:17.437333 containerd[1816]: time="2025-09-13T00:44:17.437321675Z" level=info msg="StartContainer for \"510b5915f844dbc76b22ce694bf1c2126c99fa195699ad67f8ae4255c5a785bd\""
Sep 13 00:44:17.460333 systemd[1]: Started cri-containerd-510b5915f844dbc76b22ce694bf1c2126c99fa195699ad67f8ae4255c5a785bd.scope - libcontainer container 510b5915f844dbc76b22ce694bf1c2126c99fa195699ad67f8ae4255c5a785bd.
Sep 13 00:44:17.473733 containerd[1816]: time="2025-09-13T00:44:17.473708731Z" level=info msg="StartContainer for \"510b5915f844dbc76b22ce694bf1c2126c99fa195699ad67f8ae4255c5a785bd\" returns successfully"
Sep 13 00:44:18.104360 systemd-networkd[1605]: calid8b8b59390a: Gained IPv6LL
Sep 13 00:44:18.552281 systemd-networkd[1605]: califae0ad6261b: Gained IPv6LL
Sep 13 00:44:18.552786 systemd-networkd[1605]: cali565053d1837: Gained IPv6LL
Sep 13 00:44:19.309801 containerd[1816]: time="2025-09-13T00:44:19.309769969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:19.310147 containerd[1816]: time="2025-09-13T00:44:19.309954622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 13 00:44:19.310344 containerd[1816]: time="2025-09-13T00:44:19.310332392Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:19.311490 containerd[1816]: time="2025-09-13T00:44:19.311476538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:19.311923 containerd[1816]: time="2025-09-13T00:44:19.311908897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.881312443s"
Sep 13 00:44:19.311951 containerd[1816]: time="2025-09-13T00:44:19.311927252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 13 00:44:19.312432 containerd[1816]: time="2025-09-13T00:44:19.312416819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 13 00:44:19.313000 containerd[1816]: time="2025-09-13T00:44:19.312987463Z" level=info msg="CreateContainer within sandbox \"706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 13 00:44:19.319083 containerd[1816]: time="2025-09-13T00:44:19.319057409Z" level=info msg="CreateContainer within sandbox \"706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0087c90aebee18b1be92abe15b1b81eb3041db995093a7ef1b0bf1af8ebde32d\""
Sep 13 00:44:19.319337 containerd[1816]: time="2025-09-13T00:44:19.319294645Z" level=info msg="StartContainer for \"0087c90aebee18b1be92abe15b1b81eb3041db995093a7ef1b0bf1af8ebde32d\""
Sep 13 00:44:19.348154 systemd[1]: Started cri-containerd-0087c90aebee18b1be92abe15b1b81eb3041db995093a7ef1b0bf1af8ebde32d.scope - libcontainer container 0087c90aebee18b1be92abe15b1b81eb3041db995093a7ef1b0bf1af8ebde32d.
Sep 13 00:44:19.372623 containerd[1816]: time="2025-09-13T00:44:19.372602353Z" level=info msg="StartContainer for \"0087c90aebee18b1be92abe15b1b81eb3041db995093a7ef1b0bf1af8ebde32d\" returns successfully"
Sep 13 00:44:19.604976 kubelet[3069]: I0913 00:44:19.604905 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cdbb444-r7n8m" podStartSLOduration=26.942406725 podStartE2EDuration="29.60489313s" podCreationTimestamp="2025-09-13 00:43:50 +0000 UTC" firstStartedPulling="2025-09-13 00:44:16.64986595 +0000 UTC m=+41.257368021" lastFinishedPulling="2025-09-13 00:44:19.312352352 +0000 UTC m=+43.919854426" observedRunningTime="2025-09-13 00:44:19.604552741 +0000 UTC m=+44.212054813" watchObservedRunningTime="2025-09-13 00:44:19.60489313 +0000 UTC m=+44.212395197"
Sep 13 00:44:19.660917 containerd[1816]: time="2025-09-13T00:44:19.660863364Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:19.661128 containerd[1816]: time="2025-09-13T00:44:19.661077560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 13 00:44:19.662601 containerd[1816]: time="2025-09-13T00:44:19.662586055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 350.149716ms"
Sep 13 00:44:19.662636 containerd[1816]: time="2025-09-13T00:44:19.662603118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 13 00:44:19.663352 containerd[1816]: time="2025-09-13T00:44:19.663326092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 13 00:44:19.663871 containerd[1816]: time="2025-09-13T00:44:19.663855177Z" level=info msg="CreateContainer within sandbox \"4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 13 00:44:19.667735 containerd[1816]: time="2025-09-13T00:44:19.667722752Z" level=info msg="CreateContainer within sandbox \"4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3a8bbca9d8090862005f2b9b3563eeb3d5a56076b4deca74ceb70afea5829b89\""
Sep 13 00:44:19.668008 containerd[1816]: time="2025-09-13T00:44:19.667991468Z" level=info msg="StartContainer for \"3a8bbca9d8090862005f2b9b3563eeb3d5a56076b4deca74ceb70afea5829b89\""
Sep 13 00:44:19.693162 systemd[1]: Started cri-containerd-3a8bbca9d8090862005f2b9b3563eeb3d5a56076b4deca74ceb70afea5829b89.scope - libcontainer container 3a8bbca9d8090862005f2b9b3563eeb3d5a56076b4deca74ceb70afea5829b89.
Sep 13 00:44:19.715971 containerd[1816]: time="2025-09-13T00:44:19.715951362Z" level=info msg="StartContainer for \"3a8bbca9d8090862005f2b9b3563eeb3d5a56076b4deca74ceb70afea5829b89\" returns successfully"
Sep 13 00:44:21.617880 kubelet[3069]: I0913 00:44:21.617832 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cdbb444-7d78s" podStartSLOduration=28.71039129 podStartE2EDuration="31.617814959s" podCreationTimestamp="2025-09-13 00:43:50 +0000 UTC" firstStartedPulling="2025-09-13 00:44:16.755762268 +0000 UTC m=+41.363264341" lastFinishedPulling="2025-09-13 00:44:19.663185941 +0000 UTC m=+44.270688010" observedRunningTime="2025-09-13 00:44:20.625242783 +0000 UTC m=+45.232744931" watchObservedRunningTime="2025-09-13 00:44:21.617814959 +0000 UTC m=+46.225317027"
Sep 13 00:44:21.679715 containerd[1816]: time="2025-09-13T00:44:21.679690108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:21.680001 containerd[1816]: time="2025-09-13T00:44:21.679903153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 13 00:44:21.680215 containerd[1816]: time="2025-09-13T00:44:21.680201164Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:21.681595 containerd[1816]: time="2025-09-13T00:44:21.681544869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:21.681846 containerd[1816]: time="2025-09-13T00:44:21.681830532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.018488881s"
Sep 13 00:44:21.681917 containerd[1816]: time="2025-09-13T00:44:21.681850541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 13 00:44:21.682365 containerd[1816]: time="2025-09-13T00:44:21.682350699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 00:44:21.685255 containerd[1816]: time="2025-09-13T00:44:21.685235634Z" level=info msg="CreateContainer within sandbox \"9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 13 00:44:21.689837 containerd[1816]: time="2025-09-13T00:44:21.689791587Z" level=info msg="CreateContainer within sandbox \"9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"637dc9202dde1f4bf9bf1f74a6ebe3d3780b55c90dcd9ed79e4eda2907ca3b3c\""
Sep 13 00:44:21.690078 containerd[1816]: time="2025-09-13T00:44:21.690064803Z" level=info msg="StartContainer for \"637dc9202dde1f4bf9bf1f74a6ebe3d3780b55c90dcd9ed79e4eda2907ca3b3c\""
Sep 13 00:44:21.711310 systemd[1]: Started cri-containerd-637dc9202dde1f4bf9bf1f74a6ebe3d3780b55c90dcd9ed79e4eda2907ca3b3c.scope - libcontainer container 637dc9202dde1f4bf9bf1f74a6ebe3d3780b55c90dcd9ed79e4eda2907ca3b3c.
Sep 13 00:44:21.735863 containerd[1816]: time="2025-09-13T00:44:21.735839308Z" level=info msg="StartContainer for \"637dc9202dde1f4bf9bf1f74a6ebe3d3780b55c90dcd9ed79e4eda2907ca3b3c\" returns successfully"
Sep 13 00:44:22.620575 kubelet[3069]: I0913 00:44:22.620532 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d7d549755-rwftt" podStartSLOduration=24.809731891 podStartE2EDuration="29.620517269s" podCreationTimestamp="2025-09-13 00:43:53 +0000 UTC" firstStartedPulling="2025-09-13 00:44:16.87150456 +0000 UTC m=+41.479006638" lastFinishedPulling="2025-09-13 00:44:21.682289946 +0000 UTC m=+46.289792016" observedRunningTime="2025-09-13 00:44:22.620439837 +0000 UTC m=+47.227941907" watchObservedRunningTime="2025-09-13 00:44:22.620517269 +0000 UTC m=+47.228019337"
Sep 13 00:44:23.429405 containerd[1816]: time="2025-09-13T00:44:23.429356898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:23.429611 containerd[1816]: time="2025-09-13T00:44:23.429556405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 00:44:23.430055 containerd[1816]: time="2025-09-13T00:44:23.430005866Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:23.431061 containerd[1816]: time="2025-09-13T00:44:23.431031511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:44:23.431470 containerd[1816]: time="2025-09-13T00:44:23.431428172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 1.749061909s"
Sep 13 00:44:23.431470 containerd[1816]: time="2025-09-13T00:44:23.431444720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 00:44:23.432443 containerd[1816]: time="2025-09-13T00:44:23.432397295Z" level=info msg="CreateContainer within sandbox \"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:44:23.439996 containerd[1816]: time="2025-09-13T00:44:23.439981584Z" level=info msg="CreateContainer within sandbox \"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bbb0e0ec2b82ac8f38aeb4c81431ce741243deb75275df6bddb22143d57fb4cc\""
Sep 13 00:44:23.440170 containerd[1816]: time="2025-09-13T00:44:23.440131078Z" level=info msg="StartContainer for \"bbb0e0ec2b82ac8f38aeb4c81431ce741243deb75275df6bddb22143d57fb4cc\""
Sep 13 00:44:23.470276 systemd[1]: Started cri-containerd-bbb0e0ec2b82ac8f38aeb4c81431ce741243deb75275df6bddb22143d57fb4cc.scope - libcontainer container bbb0e0ec2b82ac8f38aeb4c81431ce741243deb75275df6bddb22143d57fb4cc.
Sep 13 00:44:23.489521 containerd[1816]: time="2025-09-13T00:44:23.489493658Z" level=info msg="StartContainer for \"bbb0e0ec2b82ac8f38aeb4c81431ce741243deb75275df6bddb22143d57fb4cc\" returns successfully"
Sep 13 00:44:23.647442 kubelet[3069]: I0913 00:44:23.647321 3069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9v25t" podStartSLOduration=22.8006892 podStartE2EDuration="30.647283699s" podCreationTimestamp="2025-09-13 00:43:53 +0000 UTC" firstStartedPulling="2025-09-13 00:44:15.585157765 +0000 UTC m=+40.192659842" lastFinishedPulling="2025-09-13 00:44:23.431752271 +0000 UTC m=+48.039254341" observedRunningTime="2025-09-13 00:44:23.646654922 +0000 UTC m=+48.254157069" watchObservedRunningTime="2025-09-13 00:44:23.647283699 +0000 UTC m=+48.254785838"
Sep 13 00:44:24.482318 kubelet[3069]: I0913 00:44:24.482207 3069 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:44:24.482318 kubelet[3069]: I0913 00:44:24.482280 3069 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:44:35.436860 containerd[1816]: time="2025-09-13T00:44:35.436634040Z" level=info msg="StopPodSandbox for \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\""
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.496 [WARNING][6816] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f143f97-3706-4d57-9a71-b20a0d04227b", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01", Pod:"calico-apiserver-6cdbb444-r7n8m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8b8b59390a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.496 [INFO][6816] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.496 [INFO][6816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" iface="eth0" netns=""
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.496 [INFO][6816] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.496 [INFO][6816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.514 [INFO][6834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0"
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.514 [INFO][6834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.514 [INFO][6834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.520 [WARNING][6834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0"
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.520 [INFO][6834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0"
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.521 [INFO][6834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:44:35.523264 containerd[1816]: 2025-09-13 00:44:35.522 [INFO][6816] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.523693 containerd[1816]: time="2025-09-13T00:44:35.523291632Z" level=info msg="TearDown network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\" successfully"
Sep 13 00:44:35.523693 containerd[1816]: time="2025-09-13T00:44:35.523312425Z" level=info msg="StopPodSandbox for \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\" returns successfully"
Sep 13 00:44:35.523747 containerd[1816]: time="2025-09-13T00:44:35.523728743Z" level=info msg="RemovePodSandbox for \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\""
Sep 13 00:44:35.523781 containerd[1816]: time="2025-09-13T00:44:35.523753054Z" level=info msg="Forcibly stopping sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\""
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.547 [WARNING][6862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f143f97-3706-4d57-9a71-b20a0d04227b", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"706fec495d03631258c5f38efacb3a7f0c864e1033f9b657fd4354165b7a5a01", Pod:"calico-apiserver-6cdbb444-r7n8m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8b8b59390a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.547 [INFO][6862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.547 [INFO][6862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" iface="eth0" netns=""
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.547 [INFO][6862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.547 [INFO][6862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.561 [INFO][6880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0"
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.561 [INFO][6880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.561 [INFO][6880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.566 [WARNING][6880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0"
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.567 [INFO][6880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" HandleID="k8s-pod-network.30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--r7n8m-eth0"
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.568 [INFO][6880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:44:35.583245 containerd[1816]: 2025-09-13 00:44:35.573 [INFO][6862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339"
Sep 13 00:44:35.583746 containerd[1816]: time="2025-09-13T00:44:35.583256744Z" level=info msg="TearDown network for sandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\" successfully"
Sep 13 00:44:35.587365 containerd[1816]: time="2025-09-13T00:44:35.587347401Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:44:35.587416 containerd[1816]: time="2025-09-13T00:44:35.587383409Z" level=info msg="RemovePodSandbox \"30081fbff978d53591845f11ca55d062997b02ae791cd81f32b3fc3911497339\" returns successfully"
Sep 13 00:44:35.587594 containerd[1816]: time="2025-09-13T00:44:35.587584641Z" level=info msg="StopPodSandbox for \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\""
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.604 [WARNING][6909] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.604 [INFO][6909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.604 [INFO][6909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" iface="eth0" netns=""
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.604 [INFO][6909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.604 [INFO][6909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.614 [INFO][6929] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.615 [INFO][6929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.615 [INFO][6929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.619 [WARNING][6929] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.619 [INFO][6929] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.620 [INFO][6929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:44:35.621537 containerd[1816]: 2025-09-13 00:44:35.620 [INFO][6909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.621844 containerd[1816]: time="2025-09-13T00:44:35.621562328Z" level=info msg="TearDown network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\" successfully"
Sep 13 00:44:35.621844 containerd[1816]: time="2025-09-13T00:44:35.621578043Z" level=info msg="StopPodSandbox for \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\" returns successfully"
Sep 13 00:44:35.621881 containerd[1816]: time="2025-09-13T00:44:35.621839535Z" level=info msg="RemovePodSandbox for \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\""
Sep 13 00:44:35.621881 containerd[1816]: time="2025-09-13T00:44:35.621856384Z" level=info msg="Forcibly stopping sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\""
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.638 [WARNING][6956] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" WorkloadEndpoint="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.638 [INFO][6956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.638 [INFO][6956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" iface="eth0" netns=""
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.638 [INFO][6956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.638 [INFO][6956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.648 [INFO][6972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.648 [INFO][6972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.648 [INFO][6972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.652 [WARNING][6972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.652 [INFO][6972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" HandleID="k8s-pod-network.bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58" Workload="ci--4081.3.5--n--2af8d06a22-k8s-whisker--677b4bc6cc--rc2zn-eth0"
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.653 [INFO][6972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:44:35.654783 containerd[1816]: 2025-09-13 00:44:35.654 [INFO][6956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58"
Sep 13 00:44:35.654783 containerd[1816]: time="2025-09-13T00:44:35.654776629Z" level=info msg="TearDown network for sandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\" successfully"
Sep 13 00:44:35.656218 containerd[1816]: time="2025-09-13T00:44:35.656168987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:44:35.656218 containerd[1816]: time="2025-09-13T00:44:35.656198703Z" level=info msg="RemovePodSandbox \"bac7472fdbe96c2ee546dcf435ed1a6eaf307e7942332c0e974bb50d9aaaef58\" returns successfully" Sep 13 00:44:35.656495 containerd[1816]: time="2025-09-13T00:44:35.656452787Z" level=info msg="StopPodSandbox for \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\"" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.673 [WARNING][6999] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e", Pod:"calico-apiserver-6cdbb444-7d78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae0ad6261b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.673 [INFO][6999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.673 [INFO][6999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" iface="eth0" netns="" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.673 [INFO][6999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.673 [INFO][6999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.685 [INFO][7015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.685 [INFO][7015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.685 [INFO][7015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.690 [WARNING][7015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.690 [INFO][7015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.691 [INFO][7015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.693469 containerd[1816]: 2025-09-13 00:44:35.692 [INFO][6999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.693469 containerd[1816]: time="2025-09-13T00:44:35.693451875Z" level=info msg="TearDown network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\" successfully" Sep 13 00:44:35.693469 containerd[1816]: time="2025-09-13T00:44:35.693468418Z" level=info msg="StopPodSandbox for \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\" returns successfully" Sep 13 00:44:35.694107 containerd[1816]: time="2025-09-13T00:44:35.693725652Z" level=info msg="RemovePodSandbox for \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\"" Sep 13 00:44:35.694107 containerd[1816]: time="2025-09-13T00:44:35.693745447Z" level=info msg="Forcibly stopping sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\"" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.714 [WARNING][7044] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0", GenerateName:"calico-apiserver-6cdbb444-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6aa3a1d-cf6a-40b2-a42d-f5eda57546a0", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cdbb444", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"4853535e2e0a44879eb2f4639278019ed489102eac4a8ef5efb2ec13fbd1672e", Pod:"calico-apiserver-6cdbb444-7d78s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.77.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califae0ad6261b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.714 [INFO][7044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.714 [INFO][7044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" iface="eth0" netns="" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.714 [INFO][7044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.714 [INFO][7044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.726 [INFO][7060] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.726 [INFO][7060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.726 [INFO][7060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.730 [WARNING][7060] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.730 [INFO][7060] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" HandleID="k8s-pod-network.db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--apiserver--6cdbb444--7d78s-eth0" Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.732 [INFO][7060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.733791 containerd[1816]: 2025-09-13 00:44:35.732 [INFO][7044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba" Sep 13 00:44:35.734149 containerd[1816]: time="2025-09-13T00:44:35.733820948Z" level=info msg="TearDown network for sandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\" successfully" Sep 13 00:44:35.735358 containerd[1816]: time="2025-09-13T00:44:35.735346103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:44:35.735392 containerd[1816]: time="2025-09-13T00:44:35.735378035Z" level=info msg="RemovePodSandbox \"db193b2cc374b2a1df5833be8a3a2b1470ec947203bc4377f69ab1e5dbfec3ba\" returns successfully" Sep 13 00:44:35.735650 containerd[1816]: time="2025-09-13T00:44:35.735636417Z" level=info msg="StopPodSandbox for \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\"" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.753 [WARNING][7087] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3e12566-608c-47e4-9de0-c2f38136e9e0", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982", Pod:"csi-node-driver-9v25t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9b380d2f20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.753 [INFO][7087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.753 [INFO][7087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" iface="eth0" netns="" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.753 [INFO][7087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.753 [INFO][7087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.763 [INFO][7103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.763 [INFO][7103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.763 [INFO][7103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.767 [WARNING][7103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.767 [INFO][7103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.768 [INFO][7103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.770247 containerd[1816]: 2025-09-13 00:44:35.769 [INFO][7087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.770247 containerd[1816]: time="2025-09-13T00:44:35.770244806Z" level=info msg="TearDown network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\" successfully" Sep 13 00:44:35.770547 containerd[1816]: time="2025-09-13T00:44:35.770262254Z" level=info msg="StopPodSandbox for \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\" returns successfully" Sep 13 00:44:35.770547 containerd[1816]: time="2025-09-13T00:44:35.770513900Z" level=info msg="RemovePodSandbox for \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\"" Sep 13 00:44:35.770547 containerd[1816]: time="2025-09-13T00:44:35.770531145Z" level=info msg="Forcibly stopping sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\"" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.788 [WARNING][7126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3e12566-608c-47e4-9de0-c2f38136e9e0", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"341601b79aabcf009a684234f20a237639b69aab6ac055c87f4dc0c0f30b1982", Pod:"csi-node-driver-9v25t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.77.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic9b380d2f20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.788 [INFO][7126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.788 [INFO][7126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" iface="eth0" netns="" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.788 [INFO][7126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.788 [INFO][7126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.798 [INFO][7142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.798 [INFO][7142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.798 [INFO][7142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.802 [WARNING][7142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.802 [INFO][7142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" HandleID="k8s-pod-network.d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Workload="ci--4081.3.5--n--2af8d06a22-k8s-csi--node--driver--9v25t-eth0" Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.803 [INFO][7142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.804923 containerd[1816]: 2025-09-13 00:44:35.804 [INFO][7126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc" Sep 13 00:44:35.805244 containerd[1816]: time="2025-09-13T00:44:35.804937444Z" level=info msg="TearDown network for sandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\" successfully" Sep 13 00:44:35.806960 containerd[1816]: time="2025-09-13T00:44:35.806945830Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:44:35.806997 containerd[1816]: time="2025-09-13T00:44:35.806977171Z" level=info msg="RemovePodSandbox \"d472c03a240a48ad12653178c3e9b54257504bc202f36320cd86e3960a184cdc\" returns successfully" Sep 13 00:44:35.807231 containerd[1816]: time="2025-09-13T00:44:35.807218830Z" level=info msg="StopPodSandbox for \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\"" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.824 [WARNING][7165] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb45447f-5bf3-45bc-8523-f45a74830305", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba", Pod:"coredns-7c65d6cfc9-crrvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia619552ec4b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.824 [INFO][7165] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.824 [INFO][7165] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" iface="eth0" netns="" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.824 [INFO][7165] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.824 [INFO][7165] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.836 [INFO][7182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.836 [INFO][7182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.836 [INFO][7182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.839 [WARNING][7182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.839 [INFO][7182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.840 [INFO][7182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.841781 containerd[1816]: 2025-09-13 00:44:35.841 [INFO][7165] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.842310 containerd[1816]: time="2025-09-13T00:44:35.841809766Z" level=info msg="TearDown network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\" successfully" Sep 13 00:44:35.842310 containerd[1816]: time="2025-09-13T00:44:35.841832110Z" level=info msg="StopPodSandbox for \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\" returns successfully" Sep 13 00:44:35.842310 containerd[1816]: time="2025-09-13T00:44:35.842112496Z" level=info msg="RemovePodSandbox for \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\"" Sep 13 00:44:35.842310 containerd[1816]: time="2025-09-13T00:44:35.842128407Z" level=info msg="Forcibly stopping sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\"" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.859 [WARNING][7207] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eb45447f-5bf3-45bc-8523-f45a74830305", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"69af5b62ed23e87bd305304bdde7196ea5bd590cf0ab36b8fa65b9a5fa99aeba", Pod:"coredns-7c65d6cfc9-crrvn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia619552ec4b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.877629 containerd[1816]: 
2025-09-13 00:44:35.859 [INFO][7207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.859 [INFO][7207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" iface="eth0" netns="" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.859 [INFO][7207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.859 [INFO][7207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.870 [INFO][7222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.870 [INFO][7222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.870 [INFO][7222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.874 [WARNING][7222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.874 [INFO][7222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" HandleID="k8s-pod-network.03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--crrvn-eth0" Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.875 [INFO][7222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.877629 containerd[1816]: 2025-09-13 00:44:35.876 [INFO][7207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac" Sep 13 00:44:35.877629 containerd[1816]: time="2025-09-13T00:44:35.877621695Z" level=info msg="TearDown network for sandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\" successfully" Sep 13 00:44:35.879269 containerd[1816]: time="2025-09-13T00:44:35.879254948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:44:35.879300 containerd[1816]: time="2025-09-13T00:44:35.879285296Z" level=info msg="RemovePodSandbox \"03643748798d09e3f63bc6284989b0822509ecda74439976117e714c04d433ac\" returns successfully" Sep 13 00:44:35.879537 containerd[1816]: time="2025-09-13T00:44:35.879526039Z" level=info msg="StopPodSandbox for \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\"" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.896 [WARNING][7250] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0", GenerateName:"calico-kube-controllers-6d7d549755-", Namespace:"calico-system", SelfLink:"", UID:"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7d549755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f", Pod:"calico-kube-controllers-6d7d549755-rwftt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.200/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565053d1837", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.896 [INFO][7250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.896 [INFO][7250] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" iface="eth0" netns="" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.896 [INFO][7250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.896 [INFO][7250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.908 [INFO][7267] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.908 [INFO][7267] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.908 [INFO][7267] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.911 [WARNING][7267] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.911 [INFO][7267] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.912 [INFO][7267] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.913886 containerd[1816]: 2025-09-13 00:44:35.913 [INFO][7250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.914233 containerd[1816]: time="2025-09-13T00:44:35.913909557Z" level=info msg="TearDown network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\" successfully" Sep 13 00:44:35.914233 containerd[1816]: time="2025-09-13T00:44:35.913925618Z" level=info msg="StopPodSandbox for \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\" returns successfully" Sep 13 00:44:35.914233 containerd[1816]: time="2025-09-13T00:44:35.914198130Z" level=info msg="RemovePodSandbox for \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\"" Sep 13 00:44:35.914233 containerd[1816]: time="2025-09-13T00:44:35.914216104Z" level=info msg="Forcibly stopping sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\"" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.930 [WARNING][7290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0", GenerateName:"calico-kube-controllers-6d7d549755-", Namespace:"calico-system", SelfLink:"", UID:"72b6dc4c-0f55-48ba-b4eb-7772e5d7ba11", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d7d549755", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"9a2fd5712a3e789136228b741a645876b3e7cfe7e164c68795396f7ae9a78f8f", Pod:"calico-kube-controllers-6d7d549755-rwftt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.77.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali565053d1837", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.931 [INFO][7290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.931 [INFO][7290] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" iface="eth0" netns="" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.931 [INFO][7290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.931 [INFO][7290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.940 [INFO][7305] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.941 [INFO][7305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.941 [INFO][7305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.945 [WARNING][7305] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.945 [INFO][7305] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" HandleID="k8s-pod-network.38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Workload="ci--4081.3.5--n--2af8d06a22-k8s-calico--kube--controllers--6d7d549755--rwftt-eth0" Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.945 [INFO][7305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.947392 containerd[1816]: 2025-09-13 00:44:35.946 [INFO][7290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33" Sep 13 00:44:35.947392 containerd[1816]: time="2025-09-13T00:44:35.947379388Z" level=info msg="TearDown network for sandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\" successfully" Sep 13 00:44:35.949247 containerd[1816]: time="2025-09-13T00:44:35.949231811Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:44:35.949297 containerd[1816]: time="2025-09-13T00:44:35.949269206Z" level=info msg="RemovePodSandbox \"38ab005ba3b01ec5ac217c8c07611f48fc5511e3be06030f1628a6fd9c5e6b33\" returns successfully" Sep 13 00:44:35.949542 containerd[1816]: time="2025-09-13T00:44:35.949530458Z" level=info msg="StopPodSandbox for \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\"" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.966 [WARNING][7329] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7760760a-7878-4b4e-8517-c3f27b4755d1", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1", Pod:"coredns-7c65d6cfc9-sb6sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6d33eacf72", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.966 [INFO][7329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.966 [INFO][7329] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" iface="eth0" netns="" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.966 [INFO][7329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.966 [INFO][7329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.976 [INFO][7347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.976 [INFO][7347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.976 [INFO][7347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.980 [WARNING][7347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.980 [INFO][7347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.982 [INFO][7347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:35.983615 containerd[1816]: 2025-09-13 00:44:35.982 [INFO][7329] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:35.983965 containerd[1816]: time="2025-09-13T00:44:35.983621139Z" level=info msg="TearDown network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\" successfully" Sep 13 00:44:35.983965 containerd[1816]: time="2025-09-13T00:44:35.983638280Z" level=info msg="StopPodSandbox for \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\" returns successfully" Sep 13 00:44:35.983965 containerd[1816]: time="2025-09-13T00:44:35.983927946Z" level=info msg="RemovePodSandbox for \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\"" Sep 13 00:44:35.983965 containerd[1816]: time="2025-09-13T00:44:35.983952979Z" level=info msg="Forcibly stopping sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\"" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.001 [WARNING][7374] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7760760a-7878-4b4e-8517-c3f27b4755d1", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"6ae9f623824251dd46174b2f6b5bde86274dbc32aa306caefea70cc953e8a9f1", Pod:"coredns-7c65d6cfc9-sb6sx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.77.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie6d33eacf72", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:36.018193 containerd[1816]: 
2025-09-13 00:44:36.001 [INFO][7374] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.001 [INFO][7374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" iface="eth0" netns="" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.001 [INFO][7374] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.001 [INFO][7374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.011 [INFO][7390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.012 [INFO][7390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.012 [INFO][7390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.015 [WARNING][7390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.015 [INFO][7390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" HandleID="k8s-pod-network.b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Workload="ci--4081.3.5--n--2af8d06a22-k8s-coredns--7c65d6cfc9--sb6sx-eth0" Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.016 [INFO][7390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:36.018193 containerd[1816]: 2025-09-13 00:44:36.017 [INFO][7374] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25" Sep 13 00:44:36.018485 containerd[1816]: time="2025-09-13T00:44:36.018216036Z" level=info msg="TearDown network for sandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\" successfully" Sep 13 00:44:36.019927 containerd[1816]: time="2025-09-13T00:44:36.019912394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:44:36.019964 containerd[1816]: time="2025-09-13T00:44:36.019944793Z" level=info msg="RemovePodSandbox \"b93127de8bb068ab4a5af683c8f46255516499c6a128a9dc1ecb93fc89838c25\" returns successfully" Sep 13 00:44:36.020203 containerd[1816]: time="2025-09-13T00:44:36.020193119Z" level=info msg="StopPodSandbox for \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\"" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.038 [WARNING][7414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fc068cc0-2f07-4c24-a603-b6967550f6d8", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898", Pod:"goldmane-7988f88666-bjldk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali76b825a70a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.038 [INFO][7414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.038 [INFO][7414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" iface="eth0" netns="" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.038 [INFO][7414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.038 [INFO][7414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.048 [INFO][7434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.048 [INFO][7434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.048 [INFO][7434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.051 [WARNING][7434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.052 [INFO][7434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.053 [INFO][7434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:36.054644 containerd[1816]: 2025-09-13 00:44:36.053 [INFO][7414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.055000 containerd[1816]: time="2025-09-13T00:44:36.054662363Z" level=info msg="TearDown network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\" successfully" Sep 13 00:44:36.055000 containerd[1816]: time="2025-09-13T00:44:36.054677463Z" level=info msg="StopPodSandbox for \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\" returns successfully" Sep 13 00:44:36.055000 containerd[1816]: time="2025-09-13T00:44:36.054926963Z" level=info msg="RemovePodSandbox for \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\"" Sep 13 00:44:36.055000 containerd[1816]: time="2025-09-13T00:44:36.054941836Z" level=info msg="Forcibly stopping sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\"" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.071 [WARNING][7460] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"fc068cc0-2f07-4c24-a603-b6967550f6d8", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-2af8d06a22", ContainerID:"91aec9432d7e40947f4015413e511cec7ff532e83ba638ab21a509b26354f898", Pod:"goldmane-7988f88666-bjldk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.77.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76b825a70a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.071 [INFO][7460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.072 [INFO][7460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" iface="eth0" netns="" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.072 [INFO][7460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.072 [INFO][7460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.082 [INFO][7475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.082 [INFO][7475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.082 [INFO][7475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.086 [WARNING][7475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.086 [INFO][7475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" HandleID="k8s-pod-network.1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Workload="ci--4081.3.5--n--2af8d06a22-k8s-goldmane--7988f88666--bjldk-eth0" Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.087 [INFO][7475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:44:36.089408 containerd[1816]: 2025-09-13 00:44:36.088 [INFO][7460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe" Sep 13 00:44:36.089408 containerd[1816]: time="2025-09-13T00:44:36.089396294Z" level=info msg="TearDown network for sandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\" successfully" Sep 13 00:44:36.091127 containerd[1816]: time="2025-09-13T00:44:36.091084927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 00:44:36.091127 containerd[1816]: time="2025-09-13T00:44:36.091116331Z" level=info msg="RemovePodSandbox \"1ec7edd2eae047abf5d2011fbb19fbb805cae3afb61a2ed475e4e5a963d8efbe\" returns successfully" Sep 13 00:49:47.784414 update_engine[1804]: I20250913 00:49:47.784273 1804 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 13 00:49:47.784414 update_engine[1804]: I20250913 00:49:47.784374 1804 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.784758 1804 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.785876 1804 omaha_request_params.cc:62] Current group set to lts Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.786132 1804 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.786167 1804 update_attempter.cc:643] Scheduling an action processor start. 
Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.786206 1804 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.786291 1804 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.786397 1804 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.786410 1804 omaha_request_action.cc:272] Request: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: Sep 13 00:49:47.786524 update_engine[1804]: I20250913 00:49:47.786418 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:49:47.787084 locksmithd[1852]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 13 00:49:47.787921 update_engine[1804]: I20250913 00:49:47.787871 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:49:47.788258 update_engine[1804]: I20250913 00:49:47.788196 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:49:47.788849 update_engine[1804]: E20250913 00:49:47.788794 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:49:47.788931 update_engine[1804]: I20250913 00:49:47.788861 1804 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 13 00:49:57.695109 update_engine[1804]: I20250913 00:49:57.694983 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:49:57.695696 update_engine[1804]: I20250913 00:49:57.695359 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:49:57.695793 update_engine[1804]: I20250913 00:49:57.695691 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:49:57.696419 update_engine[1804]: E20250913 00:49:57.696331 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:49:57.696558 update_engine[1804]: I20250913 00:49:57.696436 1804 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 13 00:50:07.694784 update_engine[1804]: I20250913 00:50:07.694653 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:50:07.695402 update_engine[1804]: I20250913 00:50:07.695003 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:50:07.695402 update_engine[1804]: I20250913 00:50:07.695360 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:50:07.696165 update_engine[1804]: E20250913 00:50:07.696079 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:50:07.696313 update_engine[1804]: I20250913 00:50:07.696166 1804 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 13 00:50:17.694348 update_engine[1804]: I20250913 00:50:17.694221 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:50:17.694934 update_engine[1804]: I20250913 00:50:17.694559 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:50:17.694934 update_engine[1804]: I20250913 00:50:17.694888 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 13 00:50:17.695602 update_engine[1804]: E20250913 00:50:17.695501 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:50:17.695755 update_engine[1804]: I20250913 00:50:17.695602 1804 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:50:17.695755 update_engine[1804]: I20250913 00:50:17.695626 1804 omaha_request_action.cc:617] Omaha request response: Sep 13 00:50:17.695891 update_engine[1804]: E20250913 00:50:17.695750 1804 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 13 00:50:17.695891 update_engine[1804]: I20250913 00:50:17.695789 1804 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Sep 13 00:50:17.695891 update_engine[1804]: I20250913 00:50:17.695803 1804 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:50:17.695891 update_engine[1804]: I20250913 00:50:17.695813 1804 update_attempter.cc:306] Processing Done. Sep 13 00:50:17.695891 update_engine[1804]: E20250913 00:50:17.695837 1804 update_attempter.cc:619] Update failed. 
Sep 13 00:50:17.695891 update_engine[1804]: I20250913 00:50:17.695849 1804 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 13 00:50:17.695891 update_engine[1804]: I20250913 00:50:17.695860 1804 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 13 00:50:17.695891 update_engine[1804]: I20250913 00:50:17.695871 1804 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 13 00:50:17.696394 update_engine[1804]: I20250913 00:50:17.695986 1804 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 13 00:50:17.696394 update_engine[1804]: I20250913 00:50:17.696058 1804 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 13 00:50:17.696394 update_engine[1804]: I20250913 00:50:17.696080 1804 omaha_request_action.cc:272] Request: Sep 13 00:50:17.696394 update_engine[1804]: Sep 13 00:50:17.696394 update_engine[1804]: Sep 13 00:50:17.696394 update_engine[1804]: Sep 13 00:50:17.696394 update_engine[1804]: Sep 13 00:50:17.696394 update_engine[1804]: Sep 13 00:50:17.696394 update_engine[1804]: Sep 13 00:50:17.696394 update_engine[1804]: I20250913 00:50:17.696094 1804 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 13 00:50:17.696394 update_engine[1804]: I20250913 00:50:17.696386 1804 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 13 00:50:17.696995 update_engine[1804]: I20250913 00:50:17.696700 1804 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 13 00:50:17.697076 locksmithd[1852]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 13 00:50:17.697552 update_engine[1804]: E20250913 00:50:17.697252 1804 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 13 00:50:17.697552 update_engine[1804]: I20250913 00:50:17.697331 1804 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 13 00:50:17.697552 update_engine[1804]: I20250913 00:50:17.697350 1804 omaha_request_action.cc:617] Omaha request response: Sep 13 00:50:17.697552 update_engine[1804]: I20250913 00:50:17.697364 1804 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:50:17.697552 update_engine[1804]: I20250913 00:50:17.697376 1804 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 13 00:50:17.697552 update_engine[1804]: I20250913 00:50:17.697386 1804 update_attempter.cc:306] Processing Done. Sep 13 00:50:17.697552 update_engine[1804]: I20250913 00:50:17.697398 1804 update_attempter.cc:310] Error event sent. Sep 13 00:50:17.697552 update_engine[1804]: I20250913 00:50:17.697415 1804 update_check_scheduler.cc:74] Next update check in 43m5s Sep 13 00:50:17.698046 locksmithd[1852]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 13 00:50:25.289793 systemd[1]: Started sshd@9-139.178.94.199:22-139.178.89.65:33340.service - OpenSSH per-connection server daemon (139.178.89.65:33340). Sep 13 00:50:25.336368 sshd[8912]: Accepted publickey for core from 139.178.89.65 port 33340 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:25.338120 sshd[8912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:25.343613 systemd-logind[1799]: New session 12 of user core. 
Sep 13 00:50:25.361338 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 00:50:25.513389 sshd[8912]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:25.515203 systemd[1]: sshd@9-139.178.94.199:22-139.178.89.65:33340.service: Deactivated successfully. Sep 13 00:50:25.516317 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:50:25.517252 systemd-logind[1799]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:50:25.517944 systemd-logind[1799]: Removed session 12. Sep 13 00:50:30.531364 systemd[1]: Started sshd@10-139.178.94.199:22-139.178.89.65:49118.service - OpenSSH per-connection server daemon (139.178.89.65:49118). Sep 13 00:50:30.562661 sshd[8979]: Accepted publickey for core from 139.178.89.65 port 49118 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:30.563559 sshd[8979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:30.566211 systemd-logind[1799]: New session 13 of user core. Sep 13 00:50:30.587210 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 00:50:30.668881 sshd[8979]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:30.671097 systemd[1]: sshd@10-139.178.94.199:22-139.178.89.65:49118.service: Deactivated successfully. Sep 13 00:50:30.672251 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:50:30.672679 systemd-logind[1799]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:50:30.673283 systemd-logind[1799]: Removed session 13. Sep 13 00:50:35.693616 systemd[1]: Started sshd@11-139.178.94.199:22-139.178.89.65:49128.service - OpenSSH per-connection server daemon (139.178.89.65:49128). 
Sep 13 00:50:35.757383 sshd[9025]: Accepted publickey for core from 139.178.89.65 port 49128 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:35.758867 sshd[9025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:35.763788 systemd-logind[1799]: New session 14 of user core. Sep 13 00:50:35.774306 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 00:50:35.909374 sshd[9025]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:35.932058 systemd[1]: sshd@11-139.178.94.199:22-139.178.89.65:49128.service: Deactivated successfully. Sep 13 00:50:35.936360 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:50:35.940057 systemd-logind[1799]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:50:35.953841 systemd[1]: Started sshd@12-139.178.94.199:22-139.178.89.65:49132.service - OpenSSH per-connection server daemon (139.178.89.65:49132). Sep 13 00:50:35.956507 systemd-logind[1799]: Removed session 14. Sep 13 00:50:36.010395 sshd[9053]: Accepted publickey for core from 139.178.89.65 port 49132 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:36.011856 sshd[9053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:36.016934 systemd-logind[1799]: New session 15 of user core. Sep 13 00:50:36.033386 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 00:50:36.180112 sshd[9053]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:36.191827 systemd[1]: sshd@12-139.178.94.199:22-139.178.89.65:49132.service: Deactivated successfully. Sep 13 00:50:36.192708 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:50:36.193428 systemd-logind[1799]: Session 15 logged out. Waiting for processes to exit. 
Sep 13 00:50:36.194106 systemd[1]: Started sshd@13-139.178.94.199:22-139.178.89.65:49144.service - OpenSSH per-connection server daemon (139.178.89.65:49144). Sep 13 00:50:36.194594 systemd-logind[1799]: Removed session 15. Sep 13 00:50:36.226478 sshd[9079]: Accepted publickey for core from 139.178.89.65 port 49144 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:36.229922 sshd[9079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:36.241581 systemd-logind[1799]: New session 16 of user core. Sep 13 00:50:36.264416 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 00:50:36.400182 sshd[9079]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:36.402231 systemd[1]: sshd@13-139.178.94.199:22-139.178.89.65:49144.service: Deactivated successfully. Sep 13 00:50:36.403181 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:50:36.403556 systemd-logind[1799]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:50:36.404009 systemd-logind[1799]: Removed session 16. Sep 13 00:50:41.419874 systemd[1]: Started sshd@14-139.178.94.199:22-139.178.89.65:44714.service - OpenSSH per-connection server daemon (139.178.89.65:44714). Sep 13 00:50:41.451268 sshd[9214]: Accepted publickey for core from 139.178.89.65 port 44714 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:41.452297 sshd[9214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:41.455513 systemd-logind[1799]: New session 17 of user core. Sep 13 00:50:41.474281 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 13 00:50:41.567155 sshd[9214]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:41.568800 systemd[1]: sshd@14-139.178.94.199:22-139.178.89.65:44714.service: Deactivated successfully. Sep 13 00:50:41.569799 systemd[1]: session-17.scope: Deactivated successfully. 
Sep 13 00:50:41.570559 systemd-logind[1799]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:50:41.571205 systemd-logind[1799]: Removed session 17. Sep 13 00:50:46.592046 systemd[1]: Started sshd@15-139.178.94.199:22-139.178.89.65:44716.service - OpenSSH per-connection server daemon (139.178.89.65:44716). Sep 13 00:50:46.623409 sshd[9242]: Accepted publickey for core from 139.178.89.65 port 44716 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:46.624437 sshd[9242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:46.627742 systemd-logind[1799]: New session 18 of user core. Sep 13 00:50:46.637209 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 00:50:46.725218 sshd[9242]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:46.726857 systemd[1]: sshd@15-139.178.94.199:22-139.178.89.65:44716.service: Deactivated successfully. Sep 13 00:50:46.727846 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:50:46.728622 systemd-logind[1799]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:50:46.729210 systemd-logind[1799]: Removed session 18. Sep 13 00:50:51.755095 systemd[1]: Started sshd@16-139.178.94.199:22-139.178.89.65:43766.service - OpenSSH per-connection server daemon (139.178.89.65:43766). Sep 13 00:50:51.790093 sshd[9268]: Accepted publickey for core from 139.178.89.65 port 43766 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:51.791040 sshd[9268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:51.794075 systemd-logind[1799]: New session 19 of user core. Sep 13 00:50:51.807170 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 00:50:51.898217 sshd[9268]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:51.899799 systemd[1]: sshd@16-139.178.94.199:22-139.178.89.65:43766.service: Deactivated successfully. 
Sep 13 00:50:51.900752 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:50:51.901524 systemd-logind[1799]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:50:51.902044 systemd-logind[1799]: Removed session 19. Sep 13 00:50:56.921913 systemd[1]: Started sshd@17-139.178.94.199:22-139.178.89.65:43780.service - OpenSSH per-connection server daemon (139.178.89.65:43780). Sep 13 00:50:56.953462 sshd[9294]: Accepted publickey for core from 139.178.89.65 port 43780 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:56.954315 sshd[9294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:56.957541 systemd-logind[1799]: New session 20 of user core. Sep 13 00:50:56.975303 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 00:50:57.065196 sshd[9294]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:57.090260 systemd[1]: sshd@17-139.178.94.199:22-139.178.89.65:43780.service: Deactivated successfully. Sep 13 00:50:57.091006 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 00:50:57.091715 systemd-logind[1799]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:50:57.092362 systemd[1]: Started sshd@18-139.178.94.199:22-139.178.89.65:43790.service - OpenSSH per-connection server daemon (139.178.89.65:43790). Sep 13 00:50:57.092811 systemd-logind[1799]: Removed session 20. Sep 13 00:50:57.124333 sshd[9320]: Accepted publickey for core from 139.178.89.65 port 43790 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:57.125260 sshd[9320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:57.128673 systemd-logind[1799]: New session 21 of user core. Sep 13 00:50:57.143240 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 13 00:50:57.300662 sshd[9320]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:57.314296 systemd[1]: sshd@18-139.178.94.199:22-139.178.89.65:43790.service: Deactivated successfully. Sep 13 00:50:57.315450 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 00:50:57.316525 systemd-logind[1799]: Session 21 logged out. Waiting for processes to exit. Sep 13 00:50:57.317582 systemd[1]: Started sshd@19-139.178.94.199:22-139.178.89.65:43796.service - OpenSSH per-connection server daemon (139.178.89.65:43796). Sep 13 00:50:57.318401 systemd-logind[1799]: Removed session 21. Sep 13 00:50:57.352260 sshd[9344]: Accepted publickey for core from 139.178.89.65 port 43796 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:57.355716 sshd[9344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:57.367363 systemd-logind[1799]: New session 22 of user core. Sep 13 00:50:57.387408 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 00:50:58.585931 sshd[9344]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:58.597932 systemd[1]: sshd@19-139.178.94.199:22-139.178.89.65:43796.service: Deactivated successfully. Sep 13 00:50:58.600529 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 00:50:58.602102 systemd-logind[1799]: Session 22 logged out. Waiting for processes to exit. Sep 13 00:50:58.603541 systemd[1]: Started sshd@20-139.178.94.199:22-139.178.89.65:43806.service - OpenSSH per-connection server daemon (139.178.89.65:43806). Sep 13 00:50:58.604612 systemd-logind[1799]: Removed session 22. Sep 13 00:50:58.641587 sshd[9407]: Accepted publickey for core from 139.178.89.65 port 43806 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:58.642459 sshd[9407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:58.645101 systemd-logind[1799]: New session 23 of user core. 
Sep 13 00:50:58.653174 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 00:50:58.857607 sshd[9407]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:58.868044 systemd[1]: sshd@20-139.178.94.199:22-139.178.89.65:43806.service: Deactivated successfully. Sep 13 00:50:58.868996 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:50:58.869790 systemd-logind[1799]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:50:58.870590 systemd[1]: Started sshd@21-139.178.94.199:22-139.178.89.65:43812.service - OpenSSH per-connection server daemon (139.178.89.65:43812). Sep 13 00:50:58.871101 systemd-logind[1799]: Removed session 23. Sep 13 00:50:58.905508 sshd[9434]: Accepted publickey for core from 139.178.89.65 port 43812 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:50:58.908819 sshd[9434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:50:58.919352 systemd-logind[1799]: New session 24 of user core. Sep 13 00:50:58.931434 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:50:59.089567 sshd[9434]: pam_unix(sshd:session): session closed for user core Sep 13 00:50:59.091652 systemd[1]: sshd@21-139.178.94.199:22-139.178.89.65:43812.service: Deactivated successfully. Sep 13 00:50:59.092682 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:50:59.093170 systemd-logind[1799]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:50:59.093891 systemd-logind[1799]: Removed session 24. Sep 13 00:51:04.117924 systemd[1]: Started sshd@22-139.178.94.199:22-139.178.89.65:42518.service - OpenSSH per-connection server daemon (139.178.89.65:42518). 
Sep 13 00:51:04.182647 sshd[9465]: Accepted publickey for core from 139.178.89.65 port 42518 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:51:04.184146 sshd[9465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:51:04.189004 systemd-logind[1799]: New session 25 of user core. Sep 13 00:51:04.204569 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:51:04.342605 sshd[9465]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:04.346178 systemd[1]: sshd@22-139.178.94.199:22-139.178.89.65:42518.service: Deactivated successfully. Sep 13 00:51:04.348479 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:51:04.350195 systemd-logind[1799]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:51:04.351555 systemd-logind[1799]: Removed session 25. Sep 13 00:51:09.378389 systemd[1]: Started sshd@23-139.178.94.199:22-139.178.89.65:42534.service - OpenSSH per-connection server daemon (139.178.89.65:42534). Sep 13 00:51:09.407527 sshd[9491]: Accepted publickey for core from 139.178.89.65 port 42534 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:51:09.408589 sshd[9491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:51:09.412088 systemd-logind[1799]: New session 26 of user core. Sep 13 00:51:09.424531 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:51:09.518823 sshd[9491]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:09.520367 systemd[1]: sshd@23-139.178.94.199:22-139.178.89.65:42534.service: Deactivated successfully. Sep 13 00:51:09.521348 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 00:51:09.522020 systemd-logind[1799]: Session 26 logged out. Waiting for processes to exit. Sep 13 00:51:09.522616 systemd-logind[1799]: Removed session 26. 
Sep 13 00:51:14.558246 systemd[1]: Started sshd@24-139.178.94.199:22-139.178.89.65:50222.service - OpenSSH per-connection server daemon (139.178.89.65:50222). Sep 13 00:51:14.588458 sshd[9570]: Accepted publickey for core from 139.178.89.65 port 50222 ssh2: RSA SHA256:9yt090AVdPEq/FQCZmOXJ9hsscYfxbTJezbW0JfpgHU Sep 13 00:51:14.589458 sshd[9570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:51:14.593042 systemd-logind[1799]: New session 27 of user core. Sep 13 00:51:14.607258 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 00:51:14.696033 sshd[9570]: pam_unix(sshd:session): session closed for user core Sep 13 00:51:14.697564 systemd[1]: sshd@24-139.178.94.199:22-139.178.89.65:50222.service: Deactivated successfully. Sep 13 00:51:14.698543 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 00:51:14.699315 systemd-logind[1799]: Session 27 logged out. Waiting for processes to exit. Sep 13 00:51:14.699906 systemd-logind[1799]: Removed session 27.