Jul 7 06:11:23.912566 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025 Jul 7 06:11:23.912581 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:11:23.912588 kernel: BIOS-provided physical RAM map: Jul 7 06:11:23.912592 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jul 7 06:11:23.912596 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jul 7 06:11:23.912600 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jul 7 06:11:23.912604 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jul 7 06:11:23.912609 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jul 7 06:11:23.912613 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cbfff] usable Jul 7 06:11:23.912618 kernel: BIOS-e820: [mem 0x00000000819cc000-0x00000000819ccfff] ACPI NVS Jul 7 06:11:23.912622 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] reserved Jul 7 06:11:23.912626 kernel: BIOS-e820: [mem 0x00000000819ce000-0x000000008afccfff] usable Jul 7 06:11:23.912630 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jul 7 06:11:23.912634 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jul 7 06:11:23.912640 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jul 7 06:11:23.912648 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jul 7 06:11:23.912653 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jul 7 06:11:23.912657 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jul 7 06:11:23.912662 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 7 06:11:23.912681 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jul 7 06:11:23.912686 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jul 7 06:11:23.912690 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 7 06:11:23.912694 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jul 7 06:11:23.912699 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jul 7 06:11:23.912703 kernel: NX (Execute Disable) protection: active Jul 7 06:11:23.912708 kernel: APIC: Static calls initialized Jul 7 06:11:23.912713 kernel: SMBIOS 3.2.1 present. 
Jul 7 06:11:23.912718 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022 Jul 7 06:11:23.912722 kernel: DMI: Memory slots populated: 1/4 Jul 7 06:11:23.912727 kernel: tsc: Detected 3400.000 MHz processor Jul 7 06:11:23.912731 kernel: tsc: Detected 3399.906 MHz TSC Jul 7 06:11:23.912736 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 06:11:23.912741 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 06:11:23.912745 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jul 7 06:11:23.912750 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jul 7 06:11:23.912755 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 06:11:23.912760 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jul 7 06:11:23.912765 kernel: Using GB pages for direct mapping Jul 7 06:11:23.912769 kernel: ACPI: Early table checksum verification disabled Jul 7 06:11:23.912774 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jul 7 06:11:23.912781 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jul 7 06:11:23.912786 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jul 7 06:11:23.912791 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jul 7 06:11:23.912796 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jul 7 06:11:23.912801 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jul 7 06:11:23.912806 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jul 7 06:11:23.912811 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jul 7 06:11:23.912816 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jul 7 06:11:23.912821 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jul 7 06:11:23.912826 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jul 7 06:11:23.912832 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jul 7 06:11:23.912837 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jul 7 06:11:23.912841 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 06:11:23.912846 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jul 7 06:11:23.912851 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jul 7 06:11:23.912856 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 06:11:23.912861 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 06:11:23.912866 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jul 7 06:11:23.912871 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jul 7 06:11:23.912876 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 7 06:11:23.912881 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jul 7 06:11:23.912886 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jul 7 06:11:23.912891 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jul 7 06:11:23.912896 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jul 7 06:11:23.912901 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jul 7 06:11:23.912906 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jul 7 06:11:23.912911 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jul 7 06:11:23.912916 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jul 7 06:11:23.912921 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jul 7 06:11:23.912926 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jul 7 06:11:23.912931 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jul 7 06:11:23.912936 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jul 7 06:11:23.912941 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jul 7 06:11:23.912946 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jul 7 06:11:23.912950 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jul 7 06:11:23.912955 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jul 7 06:11:23.912961 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jul 7 06:11:23.912966 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jul 7 06:11:23.912971 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jul 7 06:11:23.912975 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jul 7 06:11:23.912980 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jul 7 06:11:23.912985 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jul 7 06:11:23.912990 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jul 7 06:11:23.912995 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jul 7 06:11:23.912999 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jul 7 06:11:23.913005 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jul 7 06:11:23.913010 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jul 7 06:11:23.913014 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jul 7 06:11:23.913019 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jul 7 06:11:23.913024 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jul 7 06:11:23.913029 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jul 7 06:11:23.913034 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jul 7 06:11:23.913038 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jul 7 06:11:23.913043 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jul 7 06:11:23.913049 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jul 7 06:11:23.913054 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jul 7 06:11:23.913058 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jul 7 06:11:23.913063 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jul 7 06:11:23.913068 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jul 7 06:11:23.913073 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jul 7 06:11:23.913078 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jul 7 06:11:23.913082 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jul 7 06:11:23.913087 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jul 7 06:11:23.913093 kernel: No NUMA configuration found Jul 7 06:11:23.913098 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jul 7 06:11:23.913103 kernel: NODE_DATA(0) allocated [mem 0x86eff8dc0-0x86effffff] Jul 7 06:11:23.913108 kernel: Zone ranges: Jul 7 06:11:23.913112 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 06:11:23.913117 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 7 06:11:23.913122 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Jul 7 06:11:23.913127 kernel: Device empty Jul 7 06:11:23.913132 kernel: Movable zone start for each node Jul 7 06:11:23.913137 kernel: Early memory node ranges Jul 7 06:11:23.913142 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jul 7 06:11:23.913147 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jul 7 06:11:23.913152 kernel: node 0: [mem 0x0000000040400000-0x00000000819cbfff] Jul 7 06:11:23.913157 kernel: node 0: [mem 0x00000000819ce000-0x000000008afccfff] Jul 7 06:11:23.913162 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jul 7 06:11:23.913170 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jul 7 06:11:23.913175 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jul 7 06:11:23.913180 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jul 7 06:11:23.913186 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 06:11:23.913192 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jul 7 06:11:23.913197 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 7 06:11:23.913202 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jul 7 06:11:23.913207 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jul 7 06:11:23.913213 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jul 7 06:11:23.913218 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jul 7 06:11:23.913223 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jul 7 06:11:23.913229 kernel: ACPI: PM-Timer IO Port: 0x1808 Jul 7 06:11:23.913235 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 7 06:11:23.913240 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 7 06:11:23.913245 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 7 06:11:23.913250 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 7 06:11:23.913255 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 7 06:11:23.913260 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 7 06:11:23.913265 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 7 06:11:23.913271 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 7 06:11:23.913276 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 7 06:11:23.913282 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 7 06:11:23.913287 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 7 06:11:23.913292 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 7 06:11:23.913297 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 7 06:11:23.913302 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 7 06:11:23.913307 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 7 06:11:23.913312 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 7 06:11:23.913317 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jul 7 06:11:23.913323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 06:11:23.913328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 06:11:23.913334 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 06:11:23.913339 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 7 06:11:23.913344 kernel: TSC deadline timer available Jul 7 06:11:23.913349 kernel: CPU topo: Max. logical packages: 1 Jul 7 06:11:23.913354 kernel: CPU topo: Max. 
logical dies: 1 Jul 7 06:11:23.913359 kernel: CPU topo: Max. dies per package: 1 Jul 7 06:11:23.913364 kernel: CPU topo: Max. threads per core: 2 Jul 7 06:11:23.913370 kernel: CPU topo: Num. cores per package: 8 Jul 7 06:11:23.913375 kernel: CPU topo: Num. threads per package: 16 Jul 7 06:11:23.913381 kernel: CPU topo: Allowing 16 present CPUs plus 0 hotplug CPUs Jul 7 06:11:23.913386 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jul 7 06:11:23.913391 kernel: Booting paravirtualized kernel on bare hardware Jul 7 06:11:23.913396 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 06:11:23.913402 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 7 06:11:23.913407 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 7 06:11:23.913412 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 7 06:11:23.913417 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 7 06:11:23.913423 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:11:23.913429 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 06:11:23.913434 kernel: random: crng init done Jul 7 06:11:23.913439 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jul 7 06:11:23.913445 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jul 7 06:11:23.913450 kernel: Fallback order for Node 0: 0 Jul 7 06:11:23.913455 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8363245 Jul 7 06:11:23.913460 kernel: Policy zone: Normal Jul 7 06:11:23.913465 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 06:11:23.913471 kernel: software IO TLB: area num 16. Jul 7 06:11:23.913476 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 7 06:11:23.913481 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 06:11:23.913486 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 06:11:23.913492 kernel: Dynamic Preempt: voluntary Jul 7 06:11:23.913497 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 06:11:23.913502 kernel: rcu: RCU event tracing is enabled. Jul 7 06:11:23.913508 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 7 06:11:23.913513 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 06:11:23.913519 kernel: Rude variant of Tasks RCU enabled. Jul 7 06:11:23.913524 kernel: Tracing variant of Tasks RCU enabled. Jul 7 06:11:23.913529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 06:11:23.913534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 7 06:11:23.913539 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 06:11:23.913545 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 06:11:23.913550 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jul 7 06:11:23.913555 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jul 7 06:11:23.913560 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 06:11:23.913566 kernel: Console: colour VGA+ 80x25 Jul 7 06:11:23.913572 kernel: printk: legacy console [tty0] enabled Jul 7 06:11:23.913577 kernel: printk: legacy console [ttyS1] enabled Jul 7 06:11:23.913582 kernel: ACPI: Core revision 20240827 Jul 7 06:11:23.913587 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Jul 7 06:11:23.913592 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 06:11:23.913598 kernel: DMAR: Host address width 39 Jul 7 06:11:23.913603 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jul 7 06:11:23.913608 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jul 7 06:11:23.913613 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jul 7 06:11:23.913619 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jul 7 06:11:23.913624 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jul 7 06:11:23.913630 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jul 7 06:11:23.913635 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jul 7 06:11:23.913640 kernel: x2apic enabled Jul 7 06:11:23.913647 kernel: APIC: Switched APIC routing to: cluster x2apic Jul 7 06:11:23.913669 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jul 7 06:11:23.913690 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jul 7 06:11:23.913697 kernel: CPU0: Thermal monitoring enabled (TM1) Jul 7 06:11:23.913702 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 7 06:11:23.913707 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 7 06:11:23.913712 kernel: process: using mwait in idle threads Jul 7 06:11:23.913717 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 06:11:23.913722 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 7 06:11:23.913727 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 7 06:11:23.913732 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 7 06:11:23.913737 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 7 06:11:23.913743 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 06:11:23.913748 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 06:11:23.913754 kernel: TAA: Mitigation: TSX disabled Jul 7 06:11:23.913759 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jul 7 06:11:23.913764 kernel: SRBDS: Mitigation: Microcode Jul 7 06:11:23.913769 kernel: GDS: Vulnerable: No microcode Jul 7 06:11:23.913774 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 06:11:23.913779 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 06:11:23.913784 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 06:11:23.913789 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 06:11:23.913794 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 7 06:11:23.913799 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 7 06:11:23.913804 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 06:11:23.913810 kernel: x86/fpu: 
xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 7 06:11:23.913815 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 7 06:11:23.913820 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jul 7 06:11:23.913826 kernel: Freeing SMP alternatives memory: 32K Jul 7 06:11:23.913831 kernel: pid_max: default: 32768 minimum: 301 Jul 7 06:11:23.913836 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 06:11:23.913841 kernel: landlock: Up and running. Jul 7 06:11:23.913846 kernel: SELinux: Initializing. Jul 7 06:11:23.913851 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 06:11:23.913856 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 7 06:11:23.913862 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 7 06:11:23.913867 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jul 7 06:11:23.913873 kernel: ... version: 4 Jul 7 06:11:23.913878 kernel: ... bit width: 48 Jul 7 06:11:23.913883 kernel: ... generic registers: 4 Jul 7 06:11:23.913888 kernel: ... value mask: 0000ffffffffffff Jul 7 06:11:23.913894 kernel: ... max period: 00007fffffffffff Jul 7 06:11:23.913899 kernel: ... fixed-purpose events: 3 Jul 7 06:11:23.913904 kernel: ... event mask: 000000070000000f Jul 7 06:11:23.913909 kernel: signal: max sigframe size: 2032 Jul 7 06:11:23.913914 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jul 7 06:11:23.913920 kernel: rcu: Hierarchical SRCU implementation. Jul 7 06:11:23.913925 kernel: rcu: Max phase no-delay instances is 400. Jul 7 06:11:23.913931 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jul 7 06:11:23.913936 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jul 7 06:11:23.913941 kernel: smp: Bringing up secondary CPUs ... Jul 7 06:11:23.913946 kernel: smpboot: x86: Booting SMP configuration: Jul 7 06:11:23.913951 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jul 7 06:11:23.913957 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 7 06:11:23.913963 kernel: smp: Brought up 1 node, 16 CPUs Jul 7 06:11:23.913968 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jul 7 06:11:23.913973 kernel: Memory: 32695432K/33452980K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 732528K reserved, 0K cma-reserved) Jul 7 06:11:23.913979 kernel: devtmpfs: initialized Jul 7 06:11:23.913984 kernel: x86/mm: Memory block size: 128MB Jul 7 06:11:23.913989 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cc000-0x819ccfff] (4096 bytes) Jul 7 06:11:23.913994 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jul 7 06:11:23.913999 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 06:11:23.914004 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 7 06:11:23.914011 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 06:11:23.914016 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 06:11:23.914021 kernel: audit: initializing netlink subsys (disabled) Jul 7 06:11:23.914026 kernel: audit: type=2000 audit(1751868675.041:1): state=initialized audit_enabled=0 res=1 Jul 7 06:11:23.914031 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 06:11:23.914036 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 06:11:23.914042 kernel: cpuidle: using governor menu Jul 7 06:11:23.914047 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 06:11:23.914052 kernel: dca service started, version 1.12.1 Jul 7 06:11:23.914058 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 7 06:11:23.914064 kernel: PCI: Using configuration type 1 for base access Jul 7 06:11:23.914069 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 06:11:23.914074 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 06:11:23.914079 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 06:11:23.914084 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 06:11:23.914089 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 06:11:23.914095 kernel: ACPI: Added _OSI(Module Device) Jul 7 06:11:23.914100 kernel: ACPI: Added _OSI(Processor Device) Jul 7 06:11:23.914106 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 06:11:23.914111 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jul 7 06:11:23.914116 kernel: ACPI: Dynamic OEM Table Load: Jul 7 06:11:23.914121 kernel: ACPI: SSDT 0xFFFF9CA182092C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jul 7 06:11:23.914126 kernel: ACPI: Dynamic OEM Table Load: Jul 7 06:11:23.914132 kernel: ACPI: SSDT 0xFFFF9CA1820AC800 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jul 7 06:11:23.914137 kernel: ACPI: Dynamic OEM Table Load: Jul 7 06:11:23.914142 kernel: ACPI: SSDT 0xFFFF9CA180249600 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jul 7 06:11:23.914147 kernel: ACPI: Dynamic OEM Table Load: Jul 7 06:11:23.914152 kernel: ACPI: SSDT 0xFFFF9CA1820A8000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jul 7 06:11:23.914158 kernel: ACPI: Dynamic OEM Table Load: Jul 7 06:11:23.914163 kernel: ACPI: SSDT 0xFFFF9CA1801A0000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jul 7 06:11:23.914168 kernel: ACPI: Dynamic OEM Table Load: Jul 7 06:11:23.914173 kernel: ACPI: SSDT 0xFFFF9CA182095400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jul 7 06:11:23.914178 kernel: ACPI: Interpreter enabled Jul 7 06:11:23.914183 kernel: ACPI: PM: (supports S0 S5) Jul 7 06:11:23.914189 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 06:11:23.914194 kernel: HEST: Enabling Firmware First mode for corrected errors. Jul 7 06:11:23.914199 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jul 7 06:11:23.914205 kernel: HEST: Table parsing has been initialized. Jul 7 06:11:23.914210 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Jul 7 06:11:23.914215 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 06:11:23.914221 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 06:11:23.914226 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jul 7 06:11:23.914231 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jul 7 06:11:23.914236 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jul 7 06:11:23.914242 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jul 7 06:11:23.914247 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jul 7 06:11:23.914253 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jul 7 06:11:23.914258 kernel: ACPI: \_TZ_.FN00: New power resource Jul 7 06:11:23.914263 kernel: ACPI: \_TZ_.FN01: New power resource Jul 7 06:11:23.914268 kernel: ACPI: \_TZ_.FN02: New power resource Jul 7 06:11:23.914274 kernel: ACPI: \_TZ_.FN03: New power resource Jul 7 06:11:23.914279 kernel: ACPI: \_TZ_.FN04: New power resource Jul 7 06:11:23.914284 kernel: ACPI: \PIN_: New power resource Jul 7 06:11:23.914289 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jul 7 06:11:23.914364 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 06:11:23.914416 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jul 7 06:11:23.914463 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jul 7 06:11:23.914470 kernel: PCI host bridge to bus 0000:00 Jul 7 06:11:23.914518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 06:11:23.914560 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 06:11:23.914601 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 06:11:23.914646 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jul 7 06:11:23.914723 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jul 7 06:11:23.914763 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jul 7 06:11:23.914818 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 conventional PCI endpoint Jul 7 06:11:23.914877 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 PCIe Root Port Jul 7 06:11:23.914926 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 06:11:23.914976 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 7 06:11:23.915023 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 06:11:23.915072 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.915123 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 conventional PCI endpoint Jul 7 06:11:23.915170 kernel: pci 0000:00:08.0: BAR 0 [mem 0x9551f000-0x9551ffff 64bit] Jul 7 06:11:23.915221 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 conventional PCI endpoint Jul 7 06:11:23.915267 kernel: pci 0000:00:12.0: BAR 0 [mem 0x9551e000-0x9551efff 64bit] Jul 7 06:11:23.915321 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 conventional PCI endpoint Jul 7 06:11:23.915369 kernel: pci 0000:00:14.0: BAR 0 [mem 0x95500000-0x9550ffff 64bit] Jul 7 06:11:23.915414 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jul 7 06:11:23.915464 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 conventional PCI endpoint Jul 7 06:11:23.915511 kernel: pci 0000:00:14.2: BAR 0 [mem 0x95512000-0x95513fff 64bit] Jul 7 06:11:23.915557 kernel: pci 0000:00:14.2: 
BAR 2 [mem 0x9551d000-0x9551dfff 64bit] Jul 7 06:11:23.915609 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 conventional PCI endpoint Jul 7 06:11:23.915699 kernel: pci 0000:00:15.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 7 06:11:23.915753 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 conventional PCI endpoint Jul 7 06:11:23.915799 kernel: pci 0000:00:15.1: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 7 06:11:23.915853 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 conventional PCI endpoint Jul 7 06:11:23.915899 kernel: pci 0000:00:16.0: BAR 0 [mem 0x9551a000-0x9551afff 64bit] Jul 7 06:11:23.915948 kernel: pci 0000:00:16.0: PME# supported from D3hot Jul 7 06:11:23.915998 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 conventional PCI endpoint Jul 7 06:11:23.916045 kernel: pci 0000:00:16.1: BAR 0 [mem 0x95519000-0x95519fff 64bit] Jul 7 06:11:23.916091 kernel: pci 0000:00:16.1: PME# supported from D3hot Jul 7 06:11:23.916142 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 conventional PCI endpoint Jul 7 06:11:23.916188 kernel: pci 0000:00:16.4: BAR 0 [mem 0x95518000-0x95518fff 64bit] Jul 7 06:11:23.916237 kernel: pci 0000:00:16.4: PME# supported from D3hot Jul 7 06:11:23.916286 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 conventional PCI endpoint Jul 7 06:11:23.916333 kernel: pci 0000:00:17.0: BAR 0 [mem 0x95510000-0x95511fff] Jul 7 06:11:23.916379 kernel: pci 0000:00:17.0: BAR 1 [mem 0x95517000-0x955170ff] Jul 7 06:11:23.916425 kernel: pci 0000:00:17.0: BAR 2 [io 0x6050-0x6057] Jul 7 06:11:23.916473 kernel: pci 0000:00:17.0: BAR 3 [io 0x6040-0x6043] Jul 7 06:11:23.916519 kernel: pci 0000:00:17.0: BAR 4 [io 0x6020-0x603f] Jul 7 06:11:23.916565 kernel: pci 0000:00:17.0: BAR 5 [mem 0x95516000-0x955167ff] Jul 7 06:11:23.916611 kernel: pci 0000:00:17.0: PME# supported from D3hot Jul 7 06:11:23.916686 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 PCIe Root Port Jul 7 06:11:23.916753 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 7 06:11:23.916800 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.916854 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 PCIe Root Port Jul 7 06:11:23.916902 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 7 06:11:23.916949 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 06:11:23.916996 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 06:11:23.917042 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.917096 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 PCIe Root Port Jul 7 06:11:23.917147 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 7 06:11:23.917195 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 06:11:23.917242 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 06:11:23.917290 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.917341 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 PCIe Root Port Jul 7 06:11:23.917390 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 7 06:11:23.917437 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.917489 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 PCIe Root Port Jul 7 06:11:23.917539 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 7 06:11:23.917585 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 7 06:11:23.917632 kernel: pci 0000:00:1c.3: 
bridge window [mem 0x94000000-0x950fffff] Jul 7 06:11:23.917690 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.917741 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 conventional PCI endpoint Jul 7 06:11:23.917790 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 7 06:11:23.917844 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 conventional PCI endpoint Jul 7 06:11:23.917896 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 conventional PCI endpoint Jul 7 06:11:23.917942 kernel: pci 0000:00:1f.4: BAR 0 [mem 0x95514000-0x955140ff 64bit] Jul 7 06:11:23.917988 kernel: pci 0000:00:1f.4: BAR 4 [io 0xefa0-0xefbf] Jul 7 06:11:23.918039 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 conventional PCI endpoint Jul 7 06:11:23.918085 kernel: pci 0000:00:1f.5: BAR 0 [mem 0xfe010000-0xfe010fff] Jul 7 06:11:23.918141 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 06:11:23.918192 kernel: pci 0000:01:00.0: BAR 0 [mem 0x92000000-0x93ffffff 64bit pref] Jul 7 06:11:23.918240 kernel: pci 0000:01:00.0: ROM [mem 0x95200000-0x952fffff pref] Jul 7 06:11:23.918289 kernel: pci 0000:01:00.0: PME# supported from D3cold Jul 7 06:11:23.918337 kernel: pci 0000:01:00.0: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 7 06:11:23.918385 kernel: pci 0000:01:00.0: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 06:11:23.918437 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 7 06:11:23.918487 kernel: pci 0000:01:00.1: BAR 0 [mem 0x90000000-0x91ffffff 64bit pref] Jul 7 06:11:23.918535 kernel: pci 0000:01:00.1: ROM [mem 0x95100000-0x951fffff pref] Jul 7 06:11:23.918583 kernel: pci 0000:01:00.1: PME# supported from D3cold Jul 7 06:11:23.918630 kernel: pci 0000:01:00.1: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 7 06:11:23.918684 kernel: pci 0000:01:00.1: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 7 06:11:23.918733 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 7 06:11:23.918780 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 7 06:11:23.918832 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jul 7 06:11:23.918883 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 7 06:11:23.918931 kernel: pci 0000:03:00.0: BAR 0 [mem 0x95400000-0x9547ffff] Jul 7 06:11:23.918978 kernel: pci 0000:03:00.0: BAR 2 [io 0x5000-0x501f] Jul 7 06:11:23.919026 kernel: pci 0000:03:00.0: BAR 3 [mem 0x95480000-0x95483fff] Jul 7 06:11:23.919074 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.919121 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 7 06:11:23.919173 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 7 06:11:23.919224 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 7 06:11:23.919273 kernel: pci 0000:04:00.0: BAR 0 [mem 0x95300000-0x9537ffff] Jul 7 06:11:23.919320 kernel: pci 0000:04:00.0: BAR 2 [io 0x4000-0x401f] Jul 7 06:11:23.919368 kernel: pci 0000:04:00.0: BAR 3 [mem 0x95380000-0x95383fff] Jul 7 06:11:23.919416 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 7 06:11:23.919463 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 7 06:11:23.919510 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 7 06:11:23.919565 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 7 06:11:23.919614 kernel: pci 0000:06:00.0: PCI 
bridge to [bus 07] Jul 7 06:11:23.919664 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 7 06:11:23.919713 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 06:11:23.919760 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 7 06:11:23.919807 kernel: pci 0000:06:00.0: supports D1 D2 Jul 7 06:11:23.919855 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 06:11:23.919904 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 7 06:11:23.919954 kernel: pci_bus 0000:07: extended config space not accessible Jul 7 06:11:23.920013 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 7 06:11:23.920064 kernel: pci 0000:07:00.0: BAR 0 [mem 0x94000000-0x94ffffff] Jul 7 06:11:23.920115 kernel: pci 0000:07:00.0: BAR 1 [mem 0x95000000-0x9501ffff] Jul 7 06:11:23.920164 kernel: pci 0000:07:00.0: BAR 2 [io 0x3000-0x307f] Jul 7 06:11:23.920214 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 06:11:23.920266 kernel: pci 0000:07:00.0: supports D1 D2 Jul 7 06:11:23.920315 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 7 06:11:23.920362 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 7 06:11:23.920370 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 7 06:11:23.920376 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 7 06:11:23.920381 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 7 06:11:23.920387 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 7 06:11:23.920392 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 7 06:11:23.920399 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 7 06:11:23.920405 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 7 06:11:23.920411 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 7 06:11:23.920416 kernel: iommu: Default domain type: Translated Jul 7 06:11:23.920421 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 06:11:23.920427 kernel: PCI: Using ACPI for IRQ routing Jul 7 06:11:23.920433 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 06:11:23.920438 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 7 06:11:23.920443 kernel: e820: reserve RAM buffer [mem 0x819cc000-0x83ffffff] Jul 7 06:11:23.920450 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jul 7 06:11:23.920455 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jul 7 06:11:23.920461 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 7 06:11:23.920468 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 7 06:11:23.920574 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 7 06:11:23.920623 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 7 06:11:23.920676 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 06:11:23.920684 kernel: vgaarb: loaded Jul 7 06:11:23.920690 kernel: clocksource: Switched to clocksource tsc-early Jul 7 06:11:23.920697 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 06:11:23.920703 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 06:11:23.920708 kernel: pnp: PnP ACPI init Jul 7 06:11:23.920756 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 7 06:11:23.920803 kernel: pnp 00:02: [dma 0 disabled] Jul 7 06:11:23.920852 kernel: pnp 00:03: [dma 0 disabled] Jul 7 06:11:23.920934 kernel: system 00:04: [io 
0x0680-0x069f] has been reserved Jul 7 06:11:23.921041 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 7 06:11:23.921088 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Jul 7 06:11:23.921146 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Jul 7 06:11:23.921199 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Jul 7 06:11:23.921305 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Jul 7 06:11:23.921366 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 7 06:11:23.921410 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 7 06:11:23.921489 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 7 06:11:23.921532 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 7 06:11:23.921584 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Jul 7 06:11:23.921630 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 7 06:11:23.921710 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 7 06:11:23.921772 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 7 06:11:23.921816 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 7 06:11:23.921862 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 7 06:11:23.921906 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Jul 7 06:11:23.921953 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Jul 7 06:11:23.921962 kernel: pnp: PnP ACPI: found 9 devices Jul 7 06:11:23.921968 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 06:11:23.921974 kernel: NET: Registered PF_INET protocol family Jul 7 06:11:23.921980 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:11:23.921987 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 7 06:11:23.921993 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 06:11:23.921999 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 06:11:23.922005 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 7 06:11:23.922011 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 7 06:11:23.922017 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 06:11:23.922023 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 06:11:23.922028 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 06:11:23.922034 kernel: NET: Registered PF_XDP protocol family Jul 7 06:11:23.922088 kernel: pci 0000:00:15.0: BAR 0 [mem 0x95515000-0x95515fff 64bit]: assigned Jul 7 06:11:23.922145 kernel: pci 0000:00:15.1: BAR 0 [mem 0x9551b000-0x9551bfff 64bit]: assigned Jul 7 06:11:23.922199 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x9551c000-0x9551cfff 64bit]: assigned Jul 7 06:11:23.922256 kernel: pci 0000:01:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: can't assign; no space Jul 7 06:11:23.922312 kernel: pci 0000:01:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 7 06:11:23.922374 kernel: pci 0000:01:00.1: VF BAR 0 [mem size 0x00800000 64bit pref]: can't assign; no space Jul 7 06:11:23.922429 kernel: pci 0000:01:00.1: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 7 06:11:23.922479 kernel: pci 0000:00:01.0: PCI bridge to 
[bus 01] Jul 7 06:11:23.922528 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 7 06:11:23.922576 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 06:11:23.922625 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 7 06:11:23.922677 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 7 06:11:23.922726 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 7 06:11:23.922778 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 7 06:11:23.922827 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 7 06:11:23.922875 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 7 06:11:23.922923 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 7 06:11:23.922978 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 7 06:11:23.923037 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 7 06:11:23.923088 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 7 06:11:23.923139 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 7 06:11:23.923188 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 7 06:11:23.923237 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 7 06:11:23.923289 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 7 06:11:23.923349 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 7 06:11:23.923391 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 06:11:23.923432 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 06:11:23.923474 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 06:11:23.923515 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 7 06:11:23.923557 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 7 06:11:23.923604 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 7 06:11:23.923711 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 7 06:11:23.923760 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 7 06:11:23.923804 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 7 06:11:23.923850 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 7 06:11:23.923893 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 7 06:11:23.923940 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 7 06:11:23.923986 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jul 7 06:11:23.924031 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 7 06:11:23.924075 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 7 06:11:23.924083 kernel: PCI: CLS 64 bytes, default 64 Jul 7 06:11:23.924089 kernel: DMAR: No ATSR found Jul 7 06:11:23.924095 kernel: DMAR: No SATC found Jul 7 06:11:23.924101 kernel: DMAR: dmar0: Using Queued invalidation Jul 7 06:11:23.924148 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 7 06:11:23.924198 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 7 06:11:23.924246 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 7 06:11:23.924293 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 7 06:11:23.924340 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 7 06:11:23.924388 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 7 06:11:23.924434 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 7 06:11:23.924480 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 7 06:11:23.924527 kernel: pci 
0000:00:16.0: Adding to iommu group 6 Jul 7 06:11:23.924576 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 7 06:11:23.924622 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 7 06:11:23.924695 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 7 06:11:23.924770 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 7 06:11:23.924817 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 7 06:11:23.924864 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 7 06:11:23.924911 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 7 06:11:23.924958 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 7 06:11:23.925006 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 7 06:11:23.925053 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 7 06:11:23.925099 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 7 06:11:23.925145 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 7 06:11:23.925193 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jul 7 06:11:23.925241 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 7 06:11:23.925289 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 7 06:11:23.925337 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 7 06:11:23.925386 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 7 06:11:23.925436 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 7 06:11:23.925444 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 7 06:11:23.925449 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 06:11:23.925455 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jul 7 06:11:23.925461 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 7 06:11:23.925466 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 7 06:11:23.925472 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 7 06:11:23.925478 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 7 06:11:23.925532 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 7 06:11:23.925541 kernel: Initialise system trusted keyrings Jul 7 06:11:23.925546 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 7 06:11:23.925552 kernel: Key type asymmetric registered Jul 7 06:11:23.925558 kernel: Asymmetric key parser 'x509' registered Jul 7 06:11:23.925563 kernel: tsc: Refined TSC clocksource calibration: 3408.000 MHz Jul 7 06:11:23.925569 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 7 06:11:23.925575 kernel: clocksource: Switched to clocksource tsc Jul 7 06:11:23.925582 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 06:11:23.925587 kernel: io scheduler mq-deadline registered Jul 7 06:11:23.925593 kernel: io scheduler kyber registered Jul 7 06:11:23.925598 kernel: io scheduler bfq registered Jul 7 06:11:23.925648 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 7 06:11:23.925749 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 7 06:11:23.925797 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 7 06:11:23.925844 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 7 06:11:23.925890 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 7 06:11:23.925940 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 7 06:11:23.925991 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 7 06:11:23.925999 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 
C) Jul 7 06:11:23.926005 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 7 06:11:23.926010 kernel: pstore: Using crash dump compression: deflate Jul 7 06:11:23.926016 kernel: pstore: Registered erst as persistent store backend Jul 7 06:11:23.926022 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 06:11:23.926027 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 06:11:23.926034 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 06:11:23.926040 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 7 06:11:23.926045 kernel: hpet_acpi_add: no address or irqs in _CRS Jul 7 06:11:23.926092 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 7 06:11:23.926100 kernel: i8042: PNP: No PS/2 controller found. Jul 7 06:11:23.926142 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 7 06:11:23.926185 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 7 06:11:23.926227 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-07-07T06:11:22 UTC (1751868682) Jul 7 06:11:23.926273 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 7 06:11:23.926281 kernel: intel_pstate: Intel P-state driver initializing Jul 7 06:11:23.926286 kernel: intel_pstate: Disabling energy efficiency optimization Jul 7 06:11:23.926292 kernel: intel_pstate: HWP enabled Jul 7 06:11:23.926298 kernel: NET: Registered PF_INET6 protocol family Jul 7 06:11:23.926303 kernel: Segment Routing with IPv6 Jul 7 06:11:23.926309 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 06:11:23.926314 kernel: NET: Registered PF_PACKET protocol family Jul 7 06:11:23.926320 kernel: Key type dns_resolver registered Jul 7 06:11:23.926327 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jul 7 06:11:23.926332 kernel: microcode: Current revision: 0x000000f4 Jul 7 06:11:23.926338 kernel: IPI shorthand broadcast: enabled Jul 7 06:11:23.926343 kernel: sched_clock: Marking stable (3759351545, 1495570730)->(6845283499, -1590361224) Jul 7 06:11:23.926349 kernel: registered taskstats version 1 Jul 7 06:11:23.926354 kernel: Loading compiled-in X.509 certificates Jul 7 06:11:23.926360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e' Jul 7 06:11:23.926365 kernel: Demotion targets for Node 0: null Jul 7 06:11:23.926371 kernel: Key type .fscrypt registered Jul 7 06:11:23.926377 kernel: Key type fscrypt-provisioning registered Jul 7 06:11:23.926383 kernel: ima: Allocated hash algorithm: sha1 Jul 7 06:11:23.926388 kernel: ima: No architecture policies found Jul 7 06:11:23.926393 kernel: clk: Disabling unused clocks Jul 7 06:11:23.926399 kernel: Warning: unable to open an initial console. Jul 7 06:11:23.926404 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 06:11:23.926410 kernel: Write protecting the kernel read-only data: 24576k Jul 7 06:11:23.926415 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 06:11:23.926422 kernel: Run /init as init process Jul 7 06:11:23.926427 kernel: with arguments: Jul 7 06:11:23.926433 kernel: /init Jul 7 06:11:23.926438 kernel: with environment: Jul 7 06:11:23.926444 kernel: HOME=/ Jul 7 06:11:23.926449 kernel: TERM=linux Jul 7 06:11:23.926454 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 06:11:23.926461 systemd[1]: Successfully made /usr/ read-only. 
Jul 7 06:11:23.926468 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:11:23.926476 systemd[1]: Detected architecture x86-64. Jul 7 06:11:23.926481 systemd[1]: Running in initrd. Jul 7 06:11:23.926487 systemd[1]: No hostname configured, using default hostname. Jul 7 06:11:23.926492 systemd[1]: Hostname set to . Jul 7 06:11:23.926498 systemd[1]: Initializing machine ID from random generator. Jul 7 06:11:23.926504 systemd[1]: Queued start job for default target initrd.target. Jul 7 06:11:23.926510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:11:23.926516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:11:23.926523 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 06:11:23.926528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:11:23.926534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 06:11:23.926540 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 06:11:23.926547 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 06:11:23.926554 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 06:11:23.926560 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:11:23.926565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:11:23.926571 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:11:23.926577 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:11:23.926583 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:11:23.926589 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:11:23.926594 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:11:23.926600 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:11:23.926607 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:11:23.926613 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 06:11:23.926619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:11:23.926624 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:11:23.926630 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:11:23.926636 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:11:23.926642 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 06:11:23.926671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:11:23.926677 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 7 06:11:23.926698 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 06:11:23.926719 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 06:11:23.926725 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:11:23.926731 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:11:23.926747 systemd-journald[298]: Collecting audit messages is disabled. Jul 7 06:11:23.926762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:11:23.926769 systemd-journald[298]: Journal started Jul 7 06:11:23.926782 systemd-journald[298]: Runtime Journal (/run/log/journal/4abc109eaf6b4f1a9da83fdbeb99575b) is 8M, max 640.1M, 632.1M free. Jul 7 06:11:23.921319 systemd-modules-load[301]: Inserted module 'overlay' Jul 7 06:11:23.939174 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 06:11:23.975560 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:11:23.975572 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 06:11:23.975583 kernel: Bridge firewalling registered Jul 7 06:11:23.944767 systemd-modules-load[301]: Inserted module 'br_netfilter' Jul 7 06:11:23.975612 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:11:24.020955 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 06:11:24.027867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:11:24.052074 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:11:24.066541 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 06:11:24.083761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:11:24.120131 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:11:24.120680 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:11:24.124842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:11:24.127364 systemd-tmpfiles[322]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 06:11:24.127556 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:11:24.128443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:11:24.130150 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:11:24.131602 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:11:24.136970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:11:24.147008 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:11:24.159045 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 06:11:24.169062 systemd-resolved[338]: Positive Trust Anchors: Jul 7 06:11:24.169071 systemd-resolved[338]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:11:24.169106 systemd-resolved[338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:11:24.171484 systemd-resolved[338]: Defaulting to hostname 'linux'. Jul 7 06:11:24.178817 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:11:24.203912 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:11:24.308752 dracut-cmdline[344]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 06:11:24.381676 kernel: SCSI subsystem initialized Jul 7 06:11:24.394661 kernel: Loading iSCSI transport class v2.0-870. Jul 7 06:11:24.408707 kernel: iscsi: registered transport (tcp) Jul 7 06:11:24.431961 kernel: iscsi: registered transport (qla4xxx) Jul 7 06:11:24.431979 kernel: QLogic iSCSI HBA Driver Jul 7 06:11:24.442353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:11:24.475773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:11:24.486961 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:11:24.585577 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 06:11:24.598447 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 06:11:24.719691 kernel: raid6: avx2x4 gen() 18580 MB/s Jul 7 06:11:24.740676 kernel: raid6: avx2x2 gen() 39906 MB/s Jul 7 06:11:24.766774 kernel: raid6: avx2x1 gen() 44934 MB/s Jul 7 06:11:24.766790 kernel: raid6: using algorithm avx2x1 gen() 44934 MB/s Jul 7 06:11:24.793853 kernel: raid6: .... xor() 24460 MB/s, rmw enabled Jul 7 06:11:24.793872 kernel: raid6: using avx2x2 recovery algorithm Jul 7 06:11:24.814690 kernel: xor: automatically using best checksumming function avx Jul 7 06:11:24.917697 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:11:24.921150 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:11:24.931014 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:11:24.975845 systemd-udevd[556]: Using default interface naming scheme 'v255'. Jul 7 06:11:24.979125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:11:24.996351 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:11:25.049738 dracut-pre-trigger[568]: rd.md=0: removing MD RAID activation Jul 7 06:11:25.066632 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
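Earlier in the journal the kernel notes that bridged traffic no longer traverses arp/ip/ip6tables unless br_netfilter is loaded, and the initrd inserts the module itself. On a host that relies on bridge firewalling (a Kubernetes node, for instance), a persistent setup could look roughly like this sketch (paths are the standard modules-load.d/sysctl.d drop-in locations, values illustrative):

  # /etc/modules-load.d/br_netfilter.conf
  br_netfilter

  # /etc/sysctl.d/90-bridge-nf.conf
  net.bridge.bridge-nf-call-iptables = 1
  net.bridge.bridge-nf-call-ip6tables = 1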
Jul 7 06:11:25.067469 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:11:25.167429 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:11:25.189766 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:11:25.170076 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:11:25.227002 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 06:11:25.227020 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 06:11:25.227028 kernel: ACPI: bus type USB registered Jul 7 06:11:25.227035 kernel: usbcore: registered new interface driver usbfs Jul 7 06:11:25.227042 kernel: usbcore: registered new interface driver hub Jul 7 06:11:25.227649 kernel: usbcore: registered new device driver usb Jul 7 06:11:25.234649 kernel: AES CTR mode by8 optimization enabled Jul 7 06:11:25.234665 kernel: PTP clock support registered Jul 7 06:11:25.234673 kernel: libata version 3.00 loaded. Jul 7 06:11:25.256588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:11:25.262664 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 7 06:11:25.262792 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 7 06:11:25.256697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:11:25.359634 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 7 06:11:25.359724 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 7 06:11:25.359804 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 7 06:11:25.359884 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 7 06:11:25.359946 kernel: hub 1-0:1.0: USB hub found Jul 7 06:11:25.360022 kernel: hub 1-0:1.0: 16 ports detected Jul 7 06:11:25.360089 kernel: hub 2-0:1.0: USB hub found Jul 7 06:11:25.360158 kernel: hub 2-0:1.0: 10 ports detected Jul 7 06:11:25.360222 kernel: ahci 0000:00:17.0: version 3.0 Jul 7 06:11:25.360289 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 7 06:11:25.360298 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 7 06:11:25.360305 kernel: ahci 0000:00:17.0: AHCI vers 0001.0301, 32 command slots, 6 Gbps, SATA mode Jul 7 06:11:25.360366 kernel: ahci 0000:00:17.0: 7/7 ports implemented (port mask 0x7f) Jul 7 06:11:25.360427 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 7 06:11:25.359673 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:11:25.522661 kernel: igb 0000:03:00.0: added PHC on eth0 Jul 7 06:11:25.522761 kernel: scsi host0: ahci Jul 7 06:11:25.522829 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 06:11:25.522896 kernel: scsi host1: ahci Jul 7 06:11:25.522959 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:44 Jul 7 06:11:25.523047 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jul 7 06:11:25.523153 kernel: scsi host2: ahci Jul 7 06:11:25.523240 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jul 7 06:11:25.523334 kernel: scsi host3: ahci Jul 7 06:11:25.523400 kernel: scsi host4: ahci Jul 7 06:11:25.523461 kernel: scsi host5: ahci Jul 7 06:11:25.523516 kernel: igb 0000:04:00.0: added PHC on eth1 Jul 7 06:11:25.523580 kernel: scsi host6: ahci Jul 7 06:11:25.523639 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 7 06:11:25.523709 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 135 lpm-pol 0 Jul 7 06:11:25.523718 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:45 Jul 7 06:11:25.523779 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 135 lpm-pol 0 Jul 7 06:11:25.523787 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jul 7 06:11:25.523848 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 135 lpm-pol 0 Jul 7 06:11:25.523857 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 7 06:11:25.523917 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 135 lpm-pol 0 Jul 7 06:11:25.523927 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jul 7 06:11:25.523988 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 135 lpm-pol 0 Jul 7 06:11:25.523996 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 135 lpm-pol 0 Jul 7 06:11:25.524003 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 135 lpm-pol 0 Jul 7 06:11:25.524010 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jul 7 06:11:25.419052 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:11:25.554773 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 7 06:11:25.545886 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:11:25.677068 kernel: mlx5_core 0000:01:00.0: PTM is not supported by PCIe Jul 7 06:11:25.677160 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002 Jul 7 06:11:25.686159 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 7 06:11:25.695996 kernel: hub 1-14:1.0: USB hub found Jul 7 06:11:25.696094 kernel: hub 1-14:1.0: 4 ports detected Jul 7 06:11:25.721694 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
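Both igb ports come up as eth0/eth1 and are immediately renamed to eno1/eno2 by udev's predictable-naming policy; the mlx5 ports are renamed to enp1s0f0np0/enp1s0f1np1 the same way later in the log. The properties driving a rename can be inspected roughly like this on the running system (interface name taken from this log):

  udevadm test-builtin net_id /sys/class/net/eno1 2>/dev/null
  ip -br link show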
Jul 7 06:11:25.833681 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 7 06:11:25.833733 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 7 06:11:25.838664 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 06:11:25.844686 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 06:11:25.850680 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 06:11:25.855699 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 7 06:11:25.862702 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 06:11:25.868680 kernel: ata1.00: Model 'Micron_5200_MTFDDAK480TDN', rev ' D1MU020', applying quirks: zeroaftertrim Jul 7 06:11:25.884688 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 7 06:11:25.885689 kernel: ata2.00: Model 'Micron_5200_MTFDDAK480TDN', rev ' D1MU020', applying quirks: zeroaftertrim Jul 7 06:11:25.901853 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 7 06:11:25.916391 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 7 06:11:25.916407 kernel: ata1.00: Features: NCQ-prio Jul 7 06:11:25.916685 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 7 06:11:25.927694 kernel: ata2.00: Features: NCQ-prio Jul 7 06:11:25.927710 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 7 06:11:25.937667 kernel: ata1.00: configured for UDMA/133 Jul 7 06:11:25.941680 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jul 7 06:11:25.941803 kernel: ata2.00: configured for UDMA/133 Jul 7 06:11:25.941811 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 7 06:11:25.962700 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 7 06:11:25.979915 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 06:11:25.979934 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 7 06:11:25.979953 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 06:11:25.979960 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 7 06:11:25.980036 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jul 7 06:11:26.005475 kernel: sd 1:0:0:0: [sda] Write Protect is off Jul 7 06:11:26.005554 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 7 06:11:26.017787 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 7 06:11:26.017914 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jul 7 06:11:26.023331 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 06:11:26.023410 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jul 7 06:11:26.032369 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jul 7 06:11:26.037179 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 7 06:11:26.037258 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 7 06:11:26.059269 kernel: ata2.00: Enabling discard_zeroes_data Jul 7 06:11:26.059285 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jul 7 06:11:26.070988 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 06:11:26.077705 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jul 7 06:11:26.099655 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 06:11:26.099673 kernel: GPT:9289727 != 937703087 Jul 7 06:11:26.105925 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 7 06:11:26.109782 kernel: GPT:9289727 != 937703087 Jul 7 06:11:26.115191 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 06:11:26.120424 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 7 06:11:26.125748 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jul 7 06:11:26.132647 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 06:11:26.145276 kernel: usbcore: registered new interface driver usbhid Jul 7 06:11:26.145296 kernel: usbhid: USB HID core driver Jul 7 06:11:26.159723 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 7 06:11:26.161676 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jul 7 06:11:26.222769 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 06:11:26.222863 kernel: mlx5_core 0000:01:00.1: PTM is not supported by PCIe Jul 7 06:11:26.222938 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002 Jul 7 06:11:26.223005 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 7 06:11:26.223068 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 7 06:11:26.208192 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Jul 7 06:11:26.267749 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 7 06:11:26.267763 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 7 06:11:26.262344 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. Jul 7 06:11:26.277724 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jul 7 06:11:26.284153 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jul 7 06:11:26.306874 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:11:26.353895 disk-uuid[785]: Primary Header is updated. Jul 7 06:11:26.353895 disk-uuid[785]: Secondary Entries is updated. Jul 7 06:11:26.353895 disk-uuid[785]: Secondary Header is updated. Jul 7 06:11:26.382672 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 06:11:26.382686 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 7 06:11:26.472710 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 7 06:11:26.484880 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jul 7 06:11:26.727680 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 7 06:11:26.741725 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jul 7 06:11:26.742198 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jul 7 06:11:26.759275 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 06:11:26.769067 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:11:26.797764 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:11:26.807877 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:11:26.828023 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
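The GPT warnings above (backup header at sector 9289727 instead of 937703087) are expected on first boot: the partition table came from a disk image far smaller than the 480 GB drive, and disk-uuid.service rewrites the primary and secondary headers a few entries later. Done by hand, relocating the backup structures would look roughly like this sketch (device name from this log; not needed when the first-boot machinery handles it):

  sgdisk -e /dev/sdb     # move backup GPT header/entries to the end of the disk
  partprobe /dev/sdb     # re-read the partition table
  sgdisk -v /dev/sdb     # verify the result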
Jul 7 06:11:26.894456 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:11:27.383149 kernel: ata1.00: Enabling discard_zeroes_data Jul 7 06:11:27.396695 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Jul 7 06:11:27.396716 disk-uuid[786]: The operation has completed successfully. Jul 7 06:11:27.429618 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:11:27.429702 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:11:27.473888 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:11:27.499449 sh[833]: Success Jul 7 06:11:27.526521 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:11:27.526543 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:11:27.535772 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:11:27.548702 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 7 06:11:27.589954 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:11:27.610247 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:11:27.637969 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 06:11:27.700758 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:11:27.700771 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (845) Jul 7 06:11:27.700778 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:11:27.700785 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:11:27.700792 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:11:27.706879 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:11:27.714881 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:11:27.738819 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:11:27.739293 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 06:11:27.756316 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:11:27.806652 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sdb6 (8:22) scanned by mount (868) Jul 7 06:11:27.823982 kernel: BTRFS info (device sdb6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:11:27.824000 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:11:27.829884 kernel: BTRFS info (device sdb6): using free-space-tree Jul 7 06:11:27.843327 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:11:27.864881 kernel: BTRFS info (device sdb6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:11:27.854879 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 06:11:27.882714 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:11:27.892923 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
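verity-setup.service assembles /dev/mapper/usr as a dm-verity device whose root hash is the verity.usrhash= value from the kernel command line, and /sysusr/usr is then mounted read-only from it. A rough way to inspect that mapping once the system is up:

  sudo veritysetup status usr        # data/hash devices, hash algorithm, root hash
  sudo dmsetup table usr             # raw verity target line
  tr ' ' '\n' </proc/cmdline | grep verity.usrhash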
Jul 7 06:11:27.933831 systemd-networkd[1015]: lo: Link UP Jul 7 06:11:27.933834 systemd-networkd[1015]: lo: Gained carrier Jul 7 06:11:27.936264 systemd-networkd[1015]: Enumeration completed Jul 7 06:11:27.936303 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:11:27.936836 systemd-networkd[1015]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:11:27.953730 systemd[1]: Reached target network.target - Network. Jul 7 06:11:27.964576 systemd-networkd[1015]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:11:27.993519 ignition[1014]: Ignition 2.21.0 Jul 7 06:11:27.991519 systemd-networkd[1015]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:11:27.993524 ignition[1014]: Stage: fetch-offline Jul 7 06:11:27.995479 unknown[1014]: fetched base config from "system" Jul 7 06:11:27.993542 ignition[1014]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:11:27.995482 unknown[1014]: fetched user config from "system" Jul 7 06:11:27.993547 ignition[1014]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 06:11:27.996727 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:11:27.993593 ignition[1014]: parsed url from cmdline: "" Jul 7 06:11:28.013827 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 7 06:11:27.993595 ignition[1014]: no config URL provided Jul 7 06:11:28.014333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:11:27.993598 ignition[1014]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:11:27.993621 ignition[1014]: parsing config with SHA512: 5a0cddc46f76d302aca287ac9fc80b6adaf58adf43f292dfb0343c21e4185c101d3f3a599b93c002d58b30aa69da3c519c1b7b9ac94a24305b2fc38d87f04f0e Jul 7 06:11:27.995668 ignition[1014]: fetch-offline: fetch-offline passed Jul 7 06:11:28.137881 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 7 06:11:28.136381 systemd-networkd[1015]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
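Every link above is configured from the shipped catch-all unit /usr/lib/systemd/network/zz-default.network, which is what later lets enp1s0f0np0 pick up an address via DHCP. A minimal unit in the same spirit would look roughly like this (contents illustrative, not a copy of the shipped file):

  # zz-default.network-style catch-all (sketch)
  [Match]
  Name=*

  [Network]
  DHCP=yes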
Jul 7 06:11:27.995671 ignition[1014]: POST message to Packet Timeline Jul 7 06:11:27.995673 ignition[1014]: POST Status error: resource requires networking Jul 7 06:11:27.995702 ignition[1014]: Ignition finished successfully Jul 7 06:11:28.062171 ignition[1030]: Ignition 2.21.0 Jul 7 06:11:28.062175 ignition[1030]: Stage: kargs Jul 7 06:11:28.062256 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:11:28.062261 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 06:11:28.063396 ignition[1030]: kargs: kargs passed Jul 7 06:11:28.063399 ignition[1030]: POST message to Packet Timeline Jul 7 06:11:28.063412 ignition[1030]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 06:11:28.063862 ignition[1030]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:40786->[::1]:53: read: connection refused Jul 7 06:11:28.263947 ignition[1030]: GET https://metadata.packet.net/metadata: attempt #2 Jul 7 06:11:28.264346 ignition[1030]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36331->[::1]:53: read: connection refused Jul 7 06:11:28.317716 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 7 06:11:28.320119 systemd-networkd[1015]: eno1: Link UP Jul 7 06:11:28.320519 systemd-networkd[1015]: eno2: Link UP Jul 7 06:11:28.320910 systemd-networkd[1015]: enp1s0f0np0: Link UP Jul 7 06:11:28.321326 systemd-networkd[1015]: enp1s0f0np0: Gained carrier Jul 7 06:11:28.335164 systemd-networkd[1015]: enp1s0f1np1: Link UP Jul 7 06:11:28.336474 systemd-networkd[1015]: enp1s0f1np1: Gained carrier Jul 7 06:11:28.374837 systemd-networkd[1015]: enp1s0f0np0: DHCPv4 address 145.40.90.175/31, gateway 145.40.90.174 acquired from 145.40.83.140 Jul 7 06:11:28.664900 ignition[1030]: GET https://metadata.packet.net/metadata: attempt #3 Jul 7 06:11:28.666086 ignition[1030]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50035->[::1]:53: read: connection refused Jul 7 06:11:29.467344 ignition[1030]: GET https://metadata.packet.net/metadata: attempt #4 Jul 7 06:11:29.468641 ignition[1030]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49674->[::1]:53: read: connection refused Jul 7 06:11:29.581304 systemd-networkd[1015]: enp1s0f1np1: Gained IPv6LL Jul 7 06:11:29.645158 systemd-networkd[1015]: enp1s0f0np0: Gained IPv6LL Jul 7 06:11:31.069781 ignition[1030]: GET https://metadata.packet.net/metadata: attempt #5 Jul 7 06:11:31.070969 ignition[1030]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59117->[::1]:53: read: connection refused Jul 7 06:11:34.273493 ignition[1030]: GET https://metadata.packet.net/metadata: attempt #6 Jul 7 06:11:35.389117 ignition[1030]: GET result: OK Jul 7 06:11:35.777433 ignition[1030]: Ignition finished successfully Jul 7 06:11:35.783591 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:11:35.794455 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
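The repeated GET failures above are an ordering effect rather than a fault: the kargs stage asks for https://metadata.packet.net/metadata before the links have carrier, so the stub resolver on [::1]:53 has no upstream to forward to; once enp1s0f0np0 acquires 145.40.90.175 via DHCP, the sixth attempt succeeds. The same endpoint can be queried by hand once networking is up, for example:

  curl -fsS https://metadata.packet.net/metadata | head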
Jul 7 06:11:35.836070 ignition[1049]: Ignition 2.21.0 Jul 7 06:11:35.836075 ignition[1049]: Stage: disks Jul 7 06:11:35.836166 ignition[1049]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:11:35.836171 ignition[1049]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 06:11:35.837235 ignition[1049]: disks: disks passed Jul 7 06:11:35.837238 ignition[1049]: POST message to Packet Timeline Jul 7 06:11:35.837252 ignition[1049]: GET https://metadata.packet.net/metadata: attempt #1 Jul 7 06:11:36.907947 ignition[1049]: GET result: OK Jul 7 06:11:37.284777 ignition[1049]: Ignition finished successfully Jul 7 06:11:37.287861 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:11:37.302735 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:11:37.320902 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:11:37.340882 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:11:37.357890 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:11:37.374792 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:11:37.393491 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:11:37.441496 systemd-fsck[1067]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 06:11:37.450191 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:11:37.464085 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:11:37.566579 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:11:37.579876 kernel: EXT4-fs (sdb9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:11:37.574054 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:11:37.591535 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:11:37.601301 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:11:37.618007 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 06:11:37.646132 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 (8:22) scanned by mount (1076) Jul 7 06:11:37.646150 kernel: BTRFS info (device sdb6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:11:37.655298 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:11:37.662300 kernel: BTRFS info (device sdb6): using free-space-tree Jul 7 06:11:37.677256 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 7 06:11:37.686988 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:11:37.687006 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:11:37.706752 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:11:37.732947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:11:37.746845 coreos-metadata[1078]: Jul 07 06:11:37.742 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 06:11:37.761681 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 7 06:11:37.781865 coreos-metadata[1094]: Jul 07 06:11:37.746 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 06:11:37.828433 initrd-setup-root[1108]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:11:37.836763 initrd-setup-root[1115]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:11:37.846715 initrd-setup-root[1122]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:11:37.856708 initrd-setup-root[1129]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:11:37.887811 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:11:37.898635 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:11:37.907475 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:11:37.940540 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:11:37.957693 kernel: BTRFS info (device sdb6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:11:37.958614 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:11:37.972803 ignition[1197]: INFO : Ignition 2.21.0 Jul 7 06:11:37.972803 ignition[1197]: INFO : Stage: mount Jul 7 06:11:37.972803 ignition[1197]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:11:37.972803 ignition[1197]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 06:11:37.972803 ignition[1197]: INFO : mount: mount passed Jul 7 06:11:37.972803 ignition[1197]: INFO : POST message to Packet Timeline Jul 7 06:11:37.972803 ignition[1197]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 06:11:38.701209 coreos-metadata[1078]: Jul 07 06:11:38.701 INFO Fetch successful Jul 7 06:11:38.778148 coreos-metadata[1094]: Jul 07 06:11:38.778 INFO Fetch successful Jul 7 06:11:38.794749 coreos-metadata[1078]: Jul 07 06:11:38.794 INFO wrote hostname ci-4372.0.1-a-2cf65e3e62 to /sysroot/etc/hostname Jul 7 06:11:38.796151 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:11:38.818094 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 7 06:11:38.818137 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Jul 7 06:11:39.001340 ignition[1197]: INFO : GET result: OK Jul 7 06:11:39.321712 ignition[1197]: INFO : Ignition finished successfully Jul 7 06:11:39.326005 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:11:39.342285 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:11:39.385498 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:11:39.427649 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 (8:22) scanned by mount (1223) Jul 7 06:11:39.445155 kernel: BTRFS info (device sdb6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:11:39.445171 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:11:39.451058 kernel: BTRFS info (device sdb6): using free-space-tree Jul 7 06:11:39.455312 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:11:39.488846 ignition[1240]: INFO : Ignition 2.21.0 Jul 7 06:11:39.488846 ignition[1240]: INFO : Stage: files Jul 7 06:11:39.500900 ignition[1240]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:11:39.500900 ignition[1240]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 06:11:39.500900 ignition[1240]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:11:39.500900 ignition[1240]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:11:39.500900 ignition[1240]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:11:39.500900 ignition[1240]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:11:39.500900 ignition[1240]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:11:39.500900 ignition[1240]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:11:39.500900 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 06:11:39.500900 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 7 06:11:39.492354 unknown[1240]: wrote ssh authorized keys file for user: core Jul 7 06:11:39.622832 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:11:39.712635 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:11:39.729688 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 7 06:11:40.449683 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:11:40.683572 ignition[1240]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 06:11:40.683572 ignition[1240]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:11:40.710867 ignition[1240]: INFO : files: files passed Jul 7 06:11:40.710867 ignition[1240]: INFO : POST message to Packet Timeline Jul 7 06:11:40.710867 ignition[1240]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 06:11:41.741339 ignition[1240]: INFO : GET result: OK Jul 7 06:11:42.099703 ignition[1240]: INFO : Ignition finished successfully Jul 7 06:11:42.103272 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:11:42.117660 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:11:42.126449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:11:42.162359 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:11:42.162594 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:11:42.192264 initrd-setup-root-after-ignition[1282]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:11:42.192264 initrd-setup-root-after-ignition[1282]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:11:42.228888 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:11:42.196639 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:11:42.206844 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:11:42.239939 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:11:42.336807 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Jul 7 06:11:42.336866 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:11:42.355134 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:11:42.375843 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:11:42.395144 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:11:42.397676 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:11:42.479670 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:11:42.493803 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:11:42.533259 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:11:42.542957 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:11:42.562218 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:11:42.581330 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:11:42.581756 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:11:42.616036 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:11:42.625335 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:11:42.642292 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:11:42.658243 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:11:42.677338 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:11:42.696319 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:11:42.715341 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:11:42.724498 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:11:42.750385 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:11:42.769381 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:11:42.787344 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:11:42.804245 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:11:42.804680 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:11:42.829382 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:11:42.847274 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:11:42.867130 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:11:42.867541 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:11:42.888216 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:11:42.888612 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:11:42.926047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:11:42.926457 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:11:42.946529 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:11:42.963125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:11:42.963550 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:11:42.983411 systemd[1]: Stopped target slices.target - Slice Units. 
Jul 7 06:11:42.990502 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:11:43.015321 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:11:43.015612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:11:43.037275 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:11:43.037552 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:11:43.054475 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:11:43.054913 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:11:43.071326 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:11:43.071717 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:11:43.181696 ignition[1306]: INFO : Ignition 2.21.0 Jul 7 06:11:43.181696 ignition[1306]: INFO : Stage: umount Jul 7 06:11:43.181696 ignition[1306]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:11:43.181696 ignition[1306]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 7 06:11:43.181696 ignition[1306]: INFO : umount: umount passed Jul 7 06:11:43.181696 ignition[1306]: INFO : POST message to Packet Timeline Jul 7 06:11:43.181696 ignition[1306]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 7 06:11:43.087431 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 7 06:11:43.087861 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:11:43.106910 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:11:43.118862 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:11:43.119012 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:11:43.158515 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:11:43.171885 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:11:43.171970 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:11:43.191959 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:11:43.192025 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:11:43.235718 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:11:43.236700 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:11:43.236796 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:11:43.259845 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:11:43.260089 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:11:44.285363 ignition[1306]: INFO : GET result: OK Jul 7 06:11:44.625546 ignition[1306]: INFO : Ignition finished successfully Jul 7 06:11:44.629949 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:11:44.630265 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:11:44.643874 systemd[1]: Stopped target network.target - Network. Jul 7 06:11:44.657034 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:11:44.657219 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:11:44.673992 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:11:44.674139 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:11:44.691003 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jul 7 06:11:44.691189 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:11:44.706980 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:11:44.707124 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:11:44.726017 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:11:44.726209 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:11:44.745364 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:11:44.762105 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:11:44.781794 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:11:44.782080 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:11:44.804024 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 06:11:44.804612 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:11:44.804939 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:11:44.821362 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 06:11:44.823618 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 06:11:44.838000 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:11:44.838127 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:11:44.861073 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:11:44.868809 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:11:44.868837 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:11:44.904864 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:11:44.904918 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:11:44.913186 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:11:44.913251 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:11:44.939031 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:11:44.939185 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:11:44.959368 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:11:44.981039 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 06:11:44.981222 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:11:44.982291 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:11:44.982616 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:11:45.001045 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:11:45.001173 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:11:45.008205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:11:45.008311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:11:45.034983 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:11:45.035120 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:11:45.061150 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 7 06:11:45.061287 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:11:45.097846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:11:45.098090 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:11:45.135852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:11:45.143786 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 06:11:45.394843 systemd-journald[298]: Received SIGTERM from PID 1 (systemd). Jul 7 06:11:45.143814 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:11:45.182008 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:11:45.182056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:11:45.202854 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:11:45.202880 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:11:45.233354 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 06:11:45.233382 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 06:11:45.233405 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:11:45.233612 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:11:45.233657 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:11:45.260030 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:11:45.260089 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:11:45.269338 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:11:45.285678 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:11:45.328168 systemd[1]: Switching root. Jul 7 06:11:45.522813 systemd-journald[298]: Journal stopped Jul 7 06:11:47.180668 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:11:47.180684 kernel: SELinux: policy capability open_perms=1 Jul 7 06:11:47.180692 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:11:47.180698 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:11:47.180703 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:11:47.180708 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:11:47.180714 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:11:47.180720 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:11:47.180725 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 06:11:47.180731 kernel: audit: type=1403 audit(1751868705.634:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:11:47.180738 systemd[1]: Successfully loaded SELinux policy in 79.619ms. Jul 7 06:11:47.180745 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.830ms. 
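After switch-root, the second-stage systemd loads the SELinux policy (79.6 ms here) and relabels /dev, /dev/shm and /run before continuing. The resulting mode can be confirmed on the host roughly like this:

  getenforce 2>/dev/null || cat /sys/fs/selinux/enforce   # 0 = permissive, 1 = enforcing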
Jul 7 06:11:47.180752 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:11:47.180758 systemd[1]: Detected architecture x86-64. Jul 7 06:11:47.180768 systemd[1]: Detected first boot. Jul 7 06:11:47.180774 systemd[1]: Hostname set to . Jul 7 06:11:47.180781 systemd[1]: Initializing machine ID from random generator. Jul 7 06:11:47.180787 zram_generator::config[1362]: No configuration found. Jul 7 06:11:47.180794 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:11:47.180801 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 06:11:47.180808 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:11:47.180815 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:11:47.180821 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 06:11:47.180827 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:11:47.180834 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:11:47.180840 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:11:47.180847 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:11:47.180854 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:11:47.180861 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:11:47.180867 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:11:47.180874 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:11:47.180880 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:11:47.180887 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:11:47.180893 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:11:47.180900 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:11:47.180906 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:11:47.180914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:11:47.180921 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jul 7 06:11:47.180927 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:11:47.180934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:11:47.180942 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 06:11:47.180949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:11:47.180955 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:11:47.180963 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:11:47.180970 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 7 06:11:47.180976 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:11:47.180983 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:11:47.180989 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:11:47.180996 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:11:47.181002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:11:47.181009 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 06:11:47.181017 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:11:47.181024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:11:47.181031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:11:47.181037 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:11:47.181044 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:11:47.181053 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:11:47.181060 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:11:47.181067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:11:47.181074 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:11:47.181080 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 06:11:47.181087 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:11:47.181094 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:11:47.181101 systemd[1]: Reached target machines.target - Containers. Jul 7 06:11:47.181109 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:11:47.181116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:11:47.181123 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:11:47.181129 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:11:47.181136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:11:47.181143 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:11:47.181150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:11:47.181156 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:11:47.181163 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:11:47.181171 kernel: ACPI: bus type drm_connector registered Jul 7 06:11:47.181177 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 06:11:47.181184 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:11:47.181191 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:11:47.181198 kernel: loop: module loaded Jul 7 06:11:47.181204 kernel: fuse: init (API version 7.41) Jul 7 06:11:47.181210 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Jul 7 06:11:47.181217 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 06:11:47.181225 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:11:47.181232 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:11:47.181239 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:11:47.181245 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:11:47.181262 systemd-journald[1466]: Collecting audit messages is disabled. Jul 7 06:11:47.181280 systemd-journald[1466]: Journal started Jul 7 06:11:47.181294 systemd-journald[1466]: Runtime Journal (/run/log/journal/32b119ff14d24bbea2f16040cff0337d) is 8M, max 640.1M, 632.1M free. Jul 7 06:11:46.050268 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:11:46.064431 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Jul 7 06:11:46.064714 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:11:47.203708 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:11:47.225680 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 06:11:47.244658 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:11:47.265808 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:11:47.265836 systemd[1]: Stopped verity-setup.service. Jul 7 06:11:47.289691 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:11:47.297684 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:11:47.306101 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:11:47.315834 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:11:47.325931 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:11:47.334926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:11:47.343895 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:11:47.352889 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:11:47.361858 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:11:47.372083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:11:47.382023 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:11:47.382177 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:11:47.392078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:11:47.392272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:11:47.403320 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:11:47.403588 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:11:47.412521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:11:47.412981 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:11:47.424554 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jul 7 06:11:47.425075 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 06:11:47.434598 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:11:47.435100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:11:47.445691 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:11:47.456625 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:11:47.468677 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:11:47.479885 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 06:11:47.490633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:11:47.509572 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:11:47.519605 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:11:47.537972 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:11:47.546844 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:11:47.546877 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:11:47.558127 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 06:11:47.569853 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:11:47.580632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:11:47.596320 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:11:47.615900 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:11:47.626795 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:11:47.627576 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:11:47.629752 systemd-journald[1466]: Time spent on flushing to /var/log/journal/32b119ff14d24bbea2f16040cff0337d is 12.403ms for 1382 entries. Jul 7 06:11:47.629752 systemd-journald[1466]: System Journal (/var/log/journal/32b119ff14d24bbea2f16040cff0337d) is 8M, max 195.6M, 187.6M free. Jul 7 06:11:47.652634 systemd-journald[1466]: Received client request to flush runtime journal. Jul 7 06:11:47.644763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:11:47.653921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:11:47.670191 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:11:47.692204 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:11:47.702684 kernel: loop0: detected capacity change from 0 to 229808 Jul 7 06:11:47.707853 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:11:47.717790 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:11:47.727688 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:11:47.734474 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jul 7 06:11:47.744908 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:11:47.754880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:11:47.766685 kernel: loop1: detected capacity change from 0 to 146240 Jul 7 06:11:47.770894 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 06:11:47.781076 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:11:47.792386 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 06:11:47.811910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:11:47.827690 kernel: loop2: detected capacity change from 0 to 113872 Jul 7 06:11:47.837898 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:11:47.840744 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 06:11:47.854597 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jul 7 06:11:47.854607 systemd-tmpfiles[1517]: ACLs are not supported, ignoring. Jul 7 06:11:47.857031 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:11:47.893654 kernel: loop3: detected capacity change from 0 to 8 Jul 7 06:11:47.919652 kernel: loop4: detected capacity change from 0 to 229808 Jul 7 06:11:47.941649 kernel: loop5: detected capacity change from 0 to 146240 Jul 7 06:11:47.967653 kernel: loop6: detected capacity change from 0 to 113872 Jul 7 06:11:47.977501 ldconfig[1496]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:11:47.978903 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:11:47.982690 kernel: loop7: detected capacity change from 0 to 8 Jul 7 06:11:47.983337 (sd-merge)[1523]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 7 06:11:47.983581 (sd-merge)[1523]: Merged extensions into '/usr'. Jul 7 06:11:47.994307 systemd[1]: Reload requested from client PID 1502 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:11:47.994315 systemd[1]: Reloading... Jul 7 06:11:48.020726 zram_generator::config[1549]: No configuration found. Jul 7 06:11:48.079187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:11:48.139749 systemd[1]: Reloading finished in 145 ms. Jul 7 06:11:48.161373 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:11:48.172032 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:11:48.196487 systemd[1]: Starting ensure-sysext.service... Jul 7 06:11:48.203572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:11:48.214550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:11:48.222189 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 06:11:48.222211 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 06:11:48.222376 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 7 06:11:48.222552 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:11:48.223113 systemd-tmpfiles[1606]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:11:48.223309 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jul 7 06:11:48.223350 systemd-tmpfiles[1606]: ACLs are not supported, ignoring. Jul 7 06:11:48.225849 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:11:48.225854 systemd-tmpfiles[1606]: Skipping /boot Jul 7 06:11:48.229363 systemd[1]: Reload requested from client PID 1605 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:11:48.229374 systemd[1]: Reloading... Jul 7 06:11:48.232433 systemd-tmpfiles[1606]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:11:48.232439 systemd-tmpfiles[1606]: Skipping /boot Jul 7 06:11:48.243620 systemd-udevd[1607]: Using default interface naming scheme 'v255'. Jul 7 06:11:48.261705 zram_generator::config[1634]: No configuration found. Jul 7 06:11:48.305657 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 7 06:11:48.305728 kernel: ACPI: button: Sleep Button [SLPB] Jul 7 06:11:48.317418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 06:11:48.318658 kernel: IPMI message handler: version 39.2 Jul 7 06:11:48.327312 kernel: ACPI: button: Power Button [PWRF] Jul 7 06:11:48.335655 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 06:11:48.335707 kernel: ipmi device interface Jul 7 06:11:48.338654 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 7 06:11:48.338925 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 7 06:11:48.359950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:11:48.365208 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 7 06:11:48.365346 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 7 06:11:48.386660 kernel: ipmi_si: IPMI System Interface driver Jul 7 06:11:48.386723 kernel: iTCO_vendor_support: vendor-support=0 Jul 7 06:11:48.386753 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 7 06:11:48.400512 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 7 06:11:48.406698 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 7 06:11:48.412978 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 7 06:11:48.421258 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 7 06:11:48.438129 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 7 06:11:48.438282 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 7 06:11:48.448378 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 7 06:11:48.455652 kernel: MACsec IEEE 802.1AE Jul 7 06:11:48.463351 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jul 7 06:11:48.463531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jul 7 06:11:48.473920 systemd[1]: Reloading finished in 244 ms. 
Jul 7 06:11:48.498675 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jul 7 06:11:48.506771 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jul 7 06:11:48.509641 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:11:48.539674 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jul 7 06:11:48.552158 kernel: intel_rapl_common: Found RAPL domain package Jul 7 06:11:48.552193 kernel: intel_rapl_common: Found RAPL domain core Jul 7 06:11:48.557461 kernel: intel_rapl_common: Found RAPL domain dram Jul 7 06:11:48.559772 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:11:48.580650 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jul 7 06:11:48.590781 systemd[1]: Finished ensure-sysext.service. Jul 7 06:11:48.615390 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 7 06:11:48.623750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:11:48.624410 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:11:48.637182 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:11:48.646827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:11:48.647455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:11:48.657717 augenrules[1830]: No rules Jul 7 06:11:48.662662 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 7 06:11:48.669649 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 7 06:11:48.670781 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:11:48.687078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:11:48.697247 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:11:48.705805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:11:48.712087 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:11:48.721728 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:11:48.722322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:11:48.733602 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:11:48.734519 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:11:48.735388 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:11:48.759423 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:11:48.782956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:11:48.792765 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 7 06:11:48.794098 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:11:48.794365 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:11:48.806486 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:11:48.818234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:11:48.818555 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:11:48.819083 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:11:48.819391 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:11:48.819793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:11:48.819910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:11:48.820111 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:11:48.820226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:11:48.820432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:11:48.820649 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:11:48.824560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:11:48.824628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:11:48.825343 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:11:48.826224 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:11:48.826250 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:11:48.826444 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:11:48.845394 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:11:48.863059 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:11:48.896253 systemd-resolved[1841]: Positive Trust Anchors: Jul 7 06:11:48.896260 systemd-resolved[1841]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:11:48.896283 systemd-resolved[1841]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:11:48.898879 systemd-resolved[1841]: Using system hostname 'ci-4372.0.1-a-2cf65e3e62'. Jul 7 06:11:48.902097 systemd-networkd[1840]: lo: Link UP Jul 7 06:11:48.902100 systemd-networkd[1840]: lo: Gained carrier Jul 7 06:11:48.904543 systemd-networkd[1840]: bond0: netdev ready Jul 7 06:11:48.905530 systemd-networkd[1840]: Enumeration completed Jul 7 06:11:48.907289 systemd-networkd[1840]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. 
Jul 7 06:11:48.914911 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:11:48.924955 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:11:48.933763 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:11:48.942925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:11:48.954431 systemd[1]: Reached target network.target - Network. Jul 7 06:11:48.961759 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:11:48.971781 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:11:48.980852 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:11:48.990782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:11:49.000762 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 06:11:49.010780 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:11:49.020765 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:11:49.020801 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:11:49.027767 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:11:49.036919 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:11:49.045875 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:11:49.055764 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:11:49.065964 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:11:49.075570 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:11:49.083895 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 06:11:49.094753 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:11:49.103934 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 06:11:49.114406 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 06:11:49.125343 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:11:49.136000 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:11:49.145234 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:11:49.153761 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:11:49.160787 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:11:49.160804 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:11:49.161418 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:11:49.170655 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 06:11:49.180562 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:11:49.188519 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 7 06:11:49.202260 coreos-metadata[1880]: Jul 07 06:11:49.202 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 06:11:49.203520 coreos-metadata[1880]: Jul 07 06:11:49.203 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 7 06:11:49.208956 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:11:49.231025 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:11:49.235839 jq[1885]: false Jul 7 06:11:49.239695 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 7 06:11:49.252521 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:11:49.253677 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jul 7 06:11:49.254641 systemd-networkd[1840]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2d.network. Jul 7 06:11:49.272393 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 06:11:49.283340 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:11:49.289514 extend-filesystems[1887]: Found /dev/sdb6 Jul 7 06:11:49.294296 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:11:49.294562 google_oslogin_nss_cache[1888]: oslogin_cache_refresh[1888]: Refreshing passwd entry cache Jul 7 06:11:49.294575 oslogin_cache_refresh[1888]: Refreshing passwd entry cache Jul 7 06:11:49.295013 extend-filesystems[1887]: Found /dev/sdb9 Jul 7 06:11:49.318805 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Jul 7 06:11:49.318251 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:11:49.318935 google_oslogin_nss_cache[1888]: oslogin_cache_refresh[1888]: Failure getting users, quitting Jul 7 06:11:49.318935 google_oslogin_nss_cache[1888]: oslogin_cache_refresh[1888]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:11:49.318935 google_oslogin_nss_cache[1888]: oslogin_cache_refresh[1888]: Refreshing group entry cache Jul 7 06:11:49.318935 google_oslogin_nss_cache[1888]: oslogin_cache_refresh[1888]: Failure getting groups, quitting Jul 7 06:11:49.318935 google_oslogin_nss_cache[1888]: oslogin_cache_refresh[1888]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:11:49.319056 extend-filesystems[1887]: Checking size of /dev/sdb9 Jul 7 06:11:49.319056 extend-filesystems[1887]: Resized partition /dev/sdb9 Jul 7 06:11:49.296825 oslogin_cache_refresh[1888]: Failure getting users, quitting Jul 7 06:11:49.319624 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:11:49.344279 extend-filesystems[1900]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 06:11:49.296839 oslogin_cache_refresh[1888]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:11:49.330368 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:11:49.296879 oslogin_cache_refresh[1888]: Refreshing group entry cache Jul 7 06:11:49.297471 oslogin_cache_refresh[1888]: Failure getting groups, quitting Jul 7 06:11:49.297480 oslogin_cache_refresh[1888]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:11:49.368236 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... 
Jul 7 06:11:49.376004 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:11:49.376369 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:11:49.384289 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:11:49.390991 update_engine[1918]: I20250707 06:11:49.390952 1918 main.cc:92] Flatcar Update Engine starting Jul 7 06:11:49.393921 systemd-logind[1913]: Watching system buttons on /dev/input/event3 (Power Button) Jul 7 06:11:49.393932 systemd-logind[1913]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 7 06:11:49.393942 systemd-logind[1913]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 7 06:11:49.398396 systemd-logind[1913]: New seat seat0. Jul 7 06:11:49.398658 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 7 06:11:49.400321 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:11:49.410321 systemd-networkd[1840]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 7 06:11:49.410648 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jul 7 06:11:49.411408 systemd-networkd[1840]: enp1s0f0np0: Link UP Jul 7 06:11:49.411543 systemd-networkd[1840]: enp1s0f0np0: Gained carrier Jul 7 06:11:49.422650 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 7 06:11:49.425338 jq[1919]: true Jul 7 06:11:49.425967 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 06:11:49.429829 systemd-networkd[1840]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Jul 7 06:11:49.429991 systemd-networkd[1840]: enp1s0f1np1: Link UP Jul 7 06:11:49.430149 systemd-networkd[1840]: enp1s0f1np1: Gained carrier Jul 7 06:11:49.436305 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:11:49.445932 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:11:49.446086 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:11:49.446237 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 06:11:49.446805 systemd-networkd[1840]: bond0: Link UP Jul 7 06:11:49.446946 systemd-networkd[1840]: bond0: Gained carrier Jul 7 06:11:49.447036 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:11:49.447306 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:11:49.447441 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:11:49.447515 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:11:49.454767 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 06:11:49.463970 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:11:49.464081 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:11:49.473251 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:11:49.473361 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 7 06:11:49.484312 sshd_keygen[1916]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:11:49.493624 jq[1922]: true Jul 7 06:11:49.494093 (ntainerd)[1923]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:11:49.498706 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:11:49.515137 tar[1921]: linux-amd64/LICENSE Jul 7 06:11:49.515542 tar[1921]: linux-amd64/helm Jul 7 06:11:49.523897 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 7 06:11:49.524017 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jul 7 06:11:49.527702 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:11:49.533641 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 7 06:11:49.533679 kernel: bond0: active interface up! Jul 7 06:11:49.540014 bash[1957]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:11:49.543190 dbus-daemon[1881]: [system] SELinux support is enabled Jul 7 06:11:49.544952 update_engine[1918]: I20250707 06:11:49.544901 1918 update_check_scheduler.cc:74] Next update check in 10m26s Jul 7 06:11:49.553844 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:11:49.563825 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:11:49.574442 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:11:49.574611 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:11:49.583766 dbus-daemon[1881]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 06:11:49.584078 systemd[1]: Starting sshkeys.service... Jul 7 06:11:49.589710 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:11:49.589734 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:11:49.604480 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:11:49.612746 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:11:49.612763 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:11:49.623014 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:11:49.636675 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:11:49.648652 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 7 06:11:49.655437 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 06:11:49.666426 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 06:11:49.696065 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jul 7 06:11:49.696317 containerd[1923]: time="2025-07-07T06:11:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 06:11:49.697056 containerd[1923]: time="2025-07-07T06:11:49.697040180Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 06:11:49.702081 containerd[1923]: time="2025-07-07T06:11:49.702064883Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.892µs" Jul 7 06:11:49.702081 containerd[1923]: time="2025-07-07T06:11:49.702080184Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 06:11:49.702126 containerd[1923]: time="2025-07-07T06:11:49.702092619Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 06:11:49.702180 containerd[1923]: time="2025-07-07T06:11:49.702172834Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 06:11:49.702197 containerd[1923]: time="2025-07-07T06:11:49.702182775Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 06:11:49.702210 containerd[1923]: time="2025-07-07T06:11:49.702195794Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702232 containerd[1923]: time="2025-07-07T06:11:49.702226346Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702249 containerd[1923]: time="2025-07-07T06:11:49.702233693Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702371 containerd[1923]: time="2025-07-07T06:11:49.702360319Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702386 containerd[1923]: time="2025-07-07T06:11:49.702371037Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702386 containerd[1923]: time="2025-07-07T06:11:49.702377349Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702386 containerd[1923]: time="2025-07-07T06:11:49.702381557Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702429 containerd[1923]: time="2025-07-07T06:11:49.702423054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702543 containerd[1923]: time="2025-07-07T06:11:49.702536050Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702559 containerd[1923]: time="2025-07-07T06:11:49.702552299Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 7 06:11:49.702576 containerd[1923]: time="2025-07-07T06:11:49.702558459Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 06:11:49.702592 containerd[1923]: time="2025-07-07T06:11:49.702578693Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 06:11:49.702728 containerd[1923]: time="2025-07-07T06:11:49.702720838Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 06:11:49.702757 containerd[1923]: time="2025-07-07T06:11:49.702751323Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:11:49.704787 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jul 7 06:11:49.706893 coreos-metadata[1982]: Jul 07 06:11:49.706 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 7 06:11:49.713855 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:11:49.714088 containerd[1923]: time="2025-07-07T06:11:49.714069453Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 06:11:49.714116 containerd[1923]: time="2025-07-07T06:11:49.714100561Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 06:11:49.714116 containerd[1923]: time="2025-07-07T06:11:49.714109981Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 06:11:49.714144 containerd[1923]: time="2025-07-07T06:11:49.714117444Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 06:11:49.714144 containerd[1923]: time="2025-07-07T06:11:49.714124030Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 06:11:49.714144 containerd[1923]: time="2025-07-07T06:11:49.714130024Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 06:11:49.714144 containerd[1923]: time="2025-07-07T06:11:49.714140478Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 06:11:49.714222 containerd[1923]: time="2025-07-07T06:11:49.714147600Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 06:11:49.714222 containerd[1923]: time="2025-07-07T06:11:49.714153995Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 06:11:49.714222 containerd[1923]: time="2025-07-07T06:11:49.714159620Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 06:11:49.714222 containerd[1923]: time="2025-07-07T06:11:49.714164940Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 06:11:49.714222 containerd[1923]: time="2025-07-07T06:11:49.714172749Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 06:11:49.714288 containerd[1923]: time="2025-07-07T06:11:49.714247846Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 06:11:49.714288 containerd[1923]: time="2025-07-07T06:11:49.714262784Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 06:11:49.714371 
containerd[1923]: time="2025-07-07T06:11:49.714357481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 06:11:49.714428 containerd[1923]: time="2025-07-07T06:11:49.714416750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 06:11:49.714571 containerd[1923]: time="2025-07-07T06:11:49.714433557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 06:11:49.714596 containerd[1923]: time="2025-07-07T06:11:49.714577983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 06:11:49.714596 containerd[1923]: time="2025-07-07T06:11:49.714589491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 06:11:49.714655 containerd[1923]: time="2025-07-07T06:11:49.714598518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 06:11:49.714655 containerd[1923]: time="2025-07-07T06:11:49.714608774Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 06:11:49.714655 containerd[1923]: time="2025-07-07T06:11:49.714618346Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 06:11:49.714716 containerd[1923]: time="2025-07-07T06:11:49.714655224Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 06:11:49.714862 containerd[1923]: time="2025-07-07T06:11:49.714848407Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 06:11:49.714892 containerd[1923]: time="2025-07-07T06:11:49.714866372Z" level=info msg="Start snapshots syncer" Jul 7 06:11:49.714892 containerd[1923]: time="2025-07-07T06:11:49.714887822Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 06:11:49.715063 containerd[1923]: time="2025-07-07T06:11:49.715040120Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 06:11:49.715148 containerd[1923]: time="2025-07-07T06:11:49.715077030Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:11:49.715512 containerd[1923]: time="2025-07-07T06:11:49.715499582Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:11:49.715589 containerd[1923]: time="2025-07-07T06:11:49.715578017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:11:49.715619 containerd[1923]: time="2025-07-07T06:11:49.715595829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:11:49.715619 containerd[1923]: time="2025-07-07T06:11:49.715608392Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:11:49.715671 containerd[1923]: time="2025-07-07T06:11:49.715618364Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:11:49.715671 containerd[1923]: time="2025-07-07T06:11:49.715634147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:11:49.715671 containerd[1923]: time="2025-07-07T06:11:49.715651664Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:11:49.715671 containerd[1923]: time="2025-07-07T06:11:49.715667167Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:11:49.715770 containerd[1923]: time="2025-07-07T06:11:49.715688681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:11:49.715770 containerd[1923]: 
time="2025-07-07T06:11:49.715700317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:11:49.715770 containerd[1923]: time="2025-07-07T06:11:49.715711279Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:11:49.715770 containerd[1923]: time="2025-07-07T06:11:49.715735859Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:11:49.715770 containerd[1923]: time="2025-07-07T06:11:49.715749396Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:11:49.715770 containerd[1923]: time="2025-07-07T06:11:49.715758013Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:11:49.715770 containerd[1923]: time="2025-07-07T06:11:49.715766938Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715774251Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715783022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715793562Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715807079Z" level=info msg="runtime interface created" Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715812262Z" level=info msg="created NRI interface" Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715819388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715828703Z" level=info msg="Connect containerd service" Jul 7 06:11:49.715932 containerd[1923]: time="2025-07-07T06:11:49.715849178Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:11:49.716311 containerd[1923]: time="2025-07-07T06:11:49.716298204Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:11:49.723632 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:11:49.783996 locksmithd[1998]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:11:49.824885 tar[1921]: linux-amd64/README.md Jul 7 06:11:49.839810 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:11:49.849306 containerd[1923]: time="2025-07-07T06:11:49.849278628Z" level=info msg="Start subscribing containerd event" Jul 7 06:11:49.849357 containerd[1923]: time="2025-07-07T06:11:49.849318163Z" level=info msg="Start recovering state" Jul 7 06:11:49.849357 containerd[1923]: time="2025-07-07T06:11:49.849339604Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 7 06:11:49.849413 containerd[1923]: time="2025-07-07T06:11:49.849373616Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:11:49.849413 containerd[1923]: time="2025-07-07T06:11:49.849393945Z" level=info msg="Start event monitor" Jul 7 06:11:49.849413 containerd[1923]: time="2025-07-07T06:11:49.849407780Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:11:49.849484 containerd[1923]: time="2025-07-07T06:11:49.849414191Z" level=info msg="Start streaming server" Jul 7 06:11:49.849484 containerd[1923]: time="2025-07-07T06:11:49.849421927Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 06:11:49.849484 containerd[1923]: time="2025-07-07T06:11:49.849428754Z" level=info msg="runtime interface starting up..." Jul 7 06:11:49.849484 containerd[1923]: time="2025-07-07T06:11:49.849433746Z" level=info msg="starting plugins..." Jul 7 06:11:49.849484 containerd[1923]: time="2025-07-07T06:11:49.849444297Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:11:49.849580 containerd[1923]: time="2025-07-07T06:11:49.849535336Z" level=info msg="containerd successfully booted in 0.153447s" Jul 7 06:11:49.849582 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:11:49.879696 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Jul 7 06:11:49.904939 extend-filesystems[1900]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Jul 7 06:11:49.904939 extend-filesystems[1900]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 7 06:11:49.904939 extend-filesystems[1900]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Jul 7 06:11:49.942722 extend-filesystems[1887]: Resized filesystem in /dev/sdb9 Jul 7 06:11:49.905373 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:11:49.905504 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:11:50.203804 coreos-metadata[1880]: Jul 07 06:11:50.203 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 7 06:11:50.765755 systemd-networkd[1840]: bond0: Gained IPv6LL Jul 7 06:11:50.766064 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:11:50.957866 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:11:50.958045 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:11:50.959317 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:11:50.970080 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:11:50.980771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:11:50.995985 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:11:51.021756 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:11:51.687588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:11:51.698145 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:11:52.154327 kubelet[2042]: E0707 06:11:52.154206 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:11:52.155825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:11:52.155913 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:11:52.156092 systemd[1]: kubelet.service: Consumed 604ms CPU time, 277.6M memory peak. Jul 7 06:11:52.258226 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jul 7 06:11:52.258376 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jul 7 06:11:53.763251 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:11:53.773619 systemd[1]: Started sshd@0-145.40.90.175:22-147.75.109.163:51794.service - OpenSSH per-connection server daemon (147.75.109.163:51794). Jul 7 06:11:53.836585 sshd[2060]: Accepted publickey for core from 147.75.109.163 port 51794 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:11:53.837290 sshd-session[2060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:53.844488 systemd-logind[1913]: New session 1 of user core. Jul 7 06:11:53.845317 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:11:53.854575 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:11:53.879941 coreos-metadata[1880]: Jul 07 06:11:53.879 INFO Fetch successful Jul 7 06:11:53.883634 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:11:53.895070 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:11:53.898103 coreos-metadata[1982]: Jul 07 06:11:53.898 INFO Fetch successful Jul 7 06:11:53.911145 (systemd)[2064]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:11:53.912804 systemd-logind[1913]: New session c1 of user core. Jul 7 06:11:53.918216 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 06:11:53.927964 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 7 06:11:53.947246 unknown[1982]: wrote ssh authorized keys file for user: core Jul 7 06:11:53.972619 update-ssh-keys[2077]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:11:53.972949 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 06:11:53.983398 systemd[1]: Finished sshkeys.service. Jul 7 06:11:54.029035 systemd[2064]: Queued start job for default target default.target. Jul 7 06:11:54.049296 systemd[2064]: Created slice app.slice - User Application Slice. Jul 7 06:11:54.049330 systemd[2064]: Reached target paths.target - Paths. Jul 7 06:11:54.049350 systemd[2064]: Reached target timers.target - Timers. Jul 7 06:11:54.050004 systemd[2064]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:11:54.055613 systemd[2064]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:11:54.055641 systemd[2064]: Reached target sockets.target - Sockets. 
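The kubelet exit above (status=1/FAILURE) comes from the missing /var/lib/kubelet/config.yaml, which on a kubeadm-provisioned node is only written when kubeadm init or kubeadm join runs; repeated failures before that point are expected. A sketch of that flow, assuming kubeadm is the provisioning tool here (the pod CIDR is a placeholder):

# Hypothetical kubeadm-based bootstrap; the provisioning tooling on this host may differ.
# kubeadm writes /var/lib/kubelet/config.yaml plus /var/lib/kubelet/kubeadm-flags.env
# (the source of KUBELET_KUBEADM_ARGS mentioned in the unit warning above),
# after which the systemd restart loop brings the kubelet up cleanly.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # placeholder CIDR

# Confirm the files the unit was complaining about now exist:
ls -l /var/lib/kubelet/config.yaml
cat /var/lib/kubelet/kubeadm-flags.env
systemctl status kubelet --no-pager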
Jul 7 06:11:54.055669 systemd[2064]: Reached target basic.target - Basic System. Jul 7 06:11:54.055690 systemd[2064]: Reached target default.target - Main User Target. Jul 7 06:11:54.055704 systemd[2064]: Startup finished in 139ms. Jul 7 06:11:54.055762 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:11:54.072879 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:11:54.138994 systemd[1]: Started sshd@1-145.40.90.175:22-147.75.109.163:51808.service - OpenSSH per-connection server daemon (147.75.109.163:51808). Jul 7 06:11:54.182905 sshd[2085]: Accepted publickey for core from 147.75.109.163 port 51808 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:11:54.183705 sshd-session[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:54.186713 systemd-logind[1913]: New session 2 of user core. Jul 7 06:11:54.202908 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:11:54.264628 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 7 06:11:54.275413 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:11:54.275882 sshd[2087]: Connection closed by 147.75.109.163 port 51808 Jul 7 06:11:54.276030 sshd-session[2085]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:54.283929 systemd[1]: Startup finished in 4.400s (kernel) + 22.339s (initrd) + 8.727s (userspace) = 35.467s. Jul 7 06:11:54.284213 systemd[1]: sshd@1-145.40.90.175:22-147.75.109.163:51808.service: Deactivated successfully. Jul 7 06:11:54.292873 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:11:54.293534 systemd-logind[1913]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:11:54.299766 systemd-logind[1913]: Removed session 2. Jul 7 06:11:54.300421 systemd[1]: Started sshd@2-145.40.90.175:22-147.75.109.163:51818.service - OpenSSH per-connection server daemon (147.75.109.163:51818). Jul 7 06:11:54.303760 login[1991]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:11:54.306467 systemd-logind[1913]: New session 3 of user core. Jul 7 06:11:54.307093 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:11:54.310685 login[1983]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 06:11:54.313284 systemd-logind[1913]: New session 4 of user core. Jul 7 06:11:54.313860 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:11:54.329945 sshd[2097]: Accepted publickey for core from 147.75.109.163 port 51818 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:11:54.330588 sshd-session[2097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:54.333207 systemd-logind[1913]: New session 5 of user core. Jul 7 06:11:54.333756 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:11:54.383463 sshd[2120]: Connection closed by 147.75.109.163 port 51818 Jul 7 06:11:54.383707 sshd-session[2097]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:54.386056 systemd[1]: sshd@2-145.40.90.175:22-147.75.109.163:51818.service: Deactivated successfully. Jul 7 06:11:54.387526 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:11:54.389177 systemd-logind[1913]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:11:54.390376 systemd-logind[1913]: Removed session 5. 
Jul 7 06:11:56.530610 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:12:02.408206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:12:02.411488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:02.733851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:02.756925 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:12:02.783413 kubelet[2133]: E0707 06:12:02.783359 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:12:02.786141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:12:02.786247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:12:02.786470 systemd[1]: kubelet.service: Consumed 193ms CPU time, 112.5M memory peak. Jul 7 06:12:04.401536 systemd[1]: Started sshd@3-145.40.90.175:22-147.75.109.163:56484.service - OpenSSH per-connection server daemon (147.75.109.163:56484). Jul 7 06:12:04.440038 sshd[2150]: Accepted publickey for core from 147.75.109.163 port 56484 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:12:04.443361 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:04.455716 systemd-logind[1913]: New session 6 of user core. Jul 7 06:12:04.475106 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:12:04.535138 sshd[2152]: Connection closed by 147.75.109.163 port 56484 Jul 7 06:12:04.535328 sshd-session[2150]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:04.548654 systemd[1]: sshd@3-145.40.90.175:22-147.75.109.163:56484.service: Deactivated successfully. Jul 7 06:12:04.549425 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:12:04.549904 systemd-logind[1913]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:12:04.550989 systemd[1]: Started sshd@4-145.40.90.175:22-147.75.109.163:56488.service - OpenSSH per-connection server daemon (147.75.109.163:56488). Jul 7 06:12:04.551612 systemd-logind[1913]: Removed session 6. Jul 7 06:12:04.581006 sshd[2158]: Accepted publickey for core from 147.75.109.163 port 56488 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:12:04.584218 sshd-session[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:04.596602 systemd-logind[1913]: New session 7 of user core. Jul 7 06:12:04.614109 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:12:04.675267 sshd[2161]: Connection closed by 147.75.109.163 port 56488 Jul 7 06:12:04.675860 sshd-session[2158]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:04.699603 systemd[1]: sshd@4-145.40.90.175:22-147.75.109.163:56488.service: Deactivated successfully. Jul 7 06:12:04.700348 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:12:04.700739 systemd-logind[1913]: Session 7 logged out. Waiting for processes to exit. 
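The ten-second gap between the kubelet failure at 06:11:52 and "Scheduled restart job, restart counter is at 1" at 06:12:02 matches the restart policy the upstream kubelet unit usually ships with (Restart=always, RestartSec=10), which keeps retrying until the config file appears. Those values are the common packaging defaults, not read from this host; they can be confirmed with:

# Show the full unit plus drop-ins, then just the restart-related properties.
systemctl cat kubelet.service --no-pager
systemctl show kubelet.service -p Restart -p RestartUSec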
Jul 7 06:12:04.701694 systemd[1]: Started sshd@5-145.40.90.175:22-147.75.109.163:56494.service - OpenSSH per-connection server daemon (147.75.109.163:56494). Jul 7 06:12:04.702204 systemd-logind[1913]: Removed session 7. Jul 7 06:12:04.744073 sshd[2167]: Accepted publickey for core from 147.75.109.163 port 56494 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:12:04.744852 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:04.748228 systemd-logind[1913]: New session 8 of user core. Jul 7 06:12:04.763890 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:12:04.828684 sshd[2169]: Connection closed by 147.75.109.163 port 56494 Jul 7 06:12:04.829373 sshd-session[2167]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:04.847877 systemd[1]: sshd@5-145.40.90.175:22-147.75.109.163:56494.service: Deactivated successfully. Jul 7 06:12:04.851629 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:12:04.853979 systemd-logind[1913]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:12:04.860005 systemd[1]: Started sshd@6-145.40.90.175:22-147.75.109.163:56504.service - OpenSSH per-connection server daemon (147.75.109.163:56504). Jul 7 06:12:04.862165 systemd-logind[1913]: Removed session 8. Jul 7 06:12:04.939655 sshd[2175]: Accepted publickey for core from 147.75.109.163 port 56504 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:12:04.940664 sshd-session[2175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:04.945037 systemd-logind[1913]: New session 9 of user core. Jul 7 06:12:04.953852 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:12:05.020929 sudo[2178]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:12:05.021068 sudo[2178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:05.031213 sudo[2178]: pam_unix(sudo:session): session closed for user root Jul 7 06:12:05.032216 sshd[2177]: Connection closed by 147.75.109.163 port 56504 Jul 7 06:12:05.032389 sshd-session[2175]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:05.052179 systemd[1]: sshd@6-145.40.90.175:22-147.75.109.163:56504.service: Deactivated successfully. Jul 7 06:12:05.053130 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:12:05.053656 systemd-logind[1913]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:12:05.055047 systemd[1]: Started sshd@7-145.40.90.175:22-147.75.109.163:56510.service - OpenSSH per-connection server daemon (147.75.109.163:56510). Jul 7 06:12:05.055845 systemd-logind[1913]: Removed session 9. Jul 7 06:12:05.094090 sshd[2184]: Accepted publickey for core from 147.75.109.163 port 56510 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:12:05.094696 sshd-session[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:05.097036 systemd-logind[1913]: New session 10 of user core. Jul 7 06:12:05.104902 systemd[1]: Started session-10.scope - Session 10 of User core. 
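The sudo entry above runs setenforce 1, putting SELinux into enforcing mode for the install flow; a later session then removes the SELinux-related audit rules. Assuming the usual SELinux userspace tools are on the image, the resulting mode can be checked with:

getenforce            # prints Enforcing, Permissive or Disabled
sudo setenforce 1     # the same call the install flow issued above
sestatus              # fuller report, if the policycoreutils tooling is installed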
Jul 7 06:12:05.158416 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:12:05.158556 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:05.161115 sudo[2189]: pam_unix(sudo:session): session closed for user root Jul 7 06:12:05.163652 sudo[2188]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:12:05.163791 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:05.168981 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:12:05.202274 augenrules[2211]: No rules Jul 7 06:12:05.202833 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:12:05.203035 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:12:05.203897 sudo[2188]: pam_unix(sudo:session): session closed for user root Jul 7 06:12:05.205117 sshd[2187]: Connection closed by 147.75.109.163 port 56510 Jul 7 06:12:05.205416 sshd-session[2184]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:05.234583 systemd[1]: sshd@7-145.40.90.175:22-147.75.109.163:56510.service: Deactivated successfully. Jul 7 06:12:05.238480 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:12:05.240785 systemd-logind[1913]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:12:05.247255 systemd[1]: Started sshd@8-145.40.90.175:22-147.75.109.163:56518.service - OpenSSH per-connection server daemon (147.75.109.163:56518). Jul 7 06:12:05.249046 systemd-logind[1913]: Removed session 10. Jul 7 06:12:05.331538 sshd[2220]: Accepted publickey for core from 147.75.109.163 port 56518 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:12:05.332626 sshd-session[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:05.337001 systemd-logind[1913]: New session 11 of user core. Jul 7 06:12:05.350916 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:12:05.414078 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:12:05.414903 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:12:05.768759 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:12:05.783036 (dockerd)[2249]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:12:05.982240 dockerd[2249]: time="2025-07-07T06:12:05.982179014Z" level=info msg="Starting up" Jul 7 06:12:05.982932 dockerd[2249]: time="2025-07-07T06:12:05.982889808Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:12:06.009553 dockerd[2249]: time="2025-07-07T06:12:06.009506213Z" level=info msg="Loading containers: start." Jul 7 06:12:06.020696 kernel: Initializing XFRM netlink socket Jul 7 06:12:06.158272 systemd-timesyncd[1842]: Network configuration changed, trying to establish connection. Jul 7 06:12:06.177560 systemd-networkd[1840]: docker0: Link UP Jul 7 06:12:06.179326 dockerd[2249]: time="2025-07-07T06:12:06.179278573Z" level=info msg="Loading containers: done." 
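The commands above delete the default rule files under /etc/audit/rules.d and restart audit-rules.service, which is why augenrules reports "No rules". For reference, augenrules assembles every *.rules file in that directory into /etc/audit/audit.rules and loads the result; a hedged sketch of rebuilding and verifying the rule set, should it need to be restored:

sudo augenrules --check     # report whether the merged rules are out of date
sudo augenrules --load      # rebuild /etc/audit/audit.rules and load it
sudo auditctl -l            # list the rules currently active in the kernel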
Jul 7 06:12:06.185932 dockerd[2249]: time="2025-07-07T06:12:06.185886376Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:12:06.185932 dockerd[2249]: time="2025-07-07T06:12:06.185924317Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:12:06.186017 dockerd[2249]: time="2025-07-07T06:12:06.185974186Z" level=info msg="Initializing buildkit" Jul 7 06:12:06.196437 dockerd[2249]: time="2025-07-07T06:12:06.196397407Z" level=info msg="Completed buildkit initialization" Jul 7 06:12:06.199820 dockerd[2249]: time="2025-07-07T06:12:06.199790670Z" level=info msg="Daemon has completed initialization" Jul 7 06:12:06.199857 dockerd[2249]: time="2025-07-07T06:12:06.199829419Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:12:06.199936 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:12:06.655837 systemd-timesyncd[1842]: Contacted time server [2604:9a00:2100:af0e:57::123]:123 (2.flatcar.pool.ntp.org). Jul 7 06:12:06.655876 systemd-timesyncd[1842]: Initial clock synchronization to Mon 2025-07-07 06:12:06.808567 UTC. Jul 7 06:12:07.045453 containerd[1923]: time="2025-07-07T06:12:07.045318939Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 06:12:07.853050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681235164.mount: Deactivated successfully. Jul 7 06:12:08.636565 containerd[1923]: time="2025-07-07T06:12:08.636538569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:08.636813 containerd[1923]: time="2025-07-07T06:12:08.636656478Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 7 06:12:08.637071 containerd[1923]: time="2025-07-07T06:12:08.637056450Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:08.638357 containerd[1923]: time="2025-07-07T06:12:08.638344657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:08.638865 containerd[1923]: time="2025-07-07T06:12:08.638852812Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.593496241s" Jul 7 06:12:08.638893 containerd[1923]: time="2025-07-07T06:12:08.638870935Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 7 06:12:08.639207 containerd[1923]: time="2025-07-07T06:12:08.639196120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 06:12:09.698030 containerd[1923]: time="2025-07-07T06:12:09.698000438Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:09.698301 containerd[1923]: time="2025-07-07T06:12:09.698224207Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 7 06:12:09.698692 containerd[1923]: time="2025-07-07T06:12:09.698677054Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:09.699834 containerd[1923]: time="2025-07-07T06:12:09.699809941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:09.700710 containerd[1923]: time="2025-07-07T06:12:09.700693042Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.061481458s" Jul 7 06:12:09.700736 containerd[1923]: time="2025-07-07T06:12:09.700710683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 7 06:12:09.700978 containerd[1923]: time="2025-07-07T06:12:09.700959126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 06:12:10.613987 containerd[1923]: time="2025-07-07T06:12:10.613960409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:10.614178 containerd[1923]: time="2025-07-07T06:12:10.614162489Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 7 06:12:10.614512 containerd[1923]: time="2025-07-07T06:12:10.614499298Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:10.615810 containerd[1923]: time="2025-07-07T06:12:10.615794654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:10.616328 containerd[1923]: time="2025-07-07T06:12:10.616313099Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 915.33717ms" Jul 7 06:12:10.616366 containerd[1923]: time="2025-07-07T06:12:10.616330389Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 7 06:12:10.616637 containerd[1923]: time="2025-07-07T06:12:10.616625493Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 
06:12:11.391765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627929046.mount: Deactivated successfully. Jul 7 06:12:11.621530 containerd[1923]: time="2025-07-07T06:12:11.621475004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:11.621752 containerd[1923]: time="2025-07-07T06:12:11.621624047Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 7 06:12:11.622069 containerd[1923]: time="2025-07-07T06:12:11.622029437Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:11.622782 containerd[1923]: time="2025-07-07T06:12:11.622739099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:11.623094 containerd[1923]: time="2025-07-07T06:12:11.623057518Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.006417079s" Jul 7 06:12:11.623094 containerd[1923]: time="2025-07-07T06:12:11.623075393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 7 06:12:11.623382 containerd[1923]: time="2025-07-07T06:12:11.623366220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 06:12:12.131282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422607146.mount: Deactivated successfully. 
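The PullImage calls above fetch the v1.33.2 control-plane images through containerd's CRI as they are first referenced. On kubeadm-style clusters the same set is often pre-pulled ahead of init; a hedged example using the containerd socket this log shows serving on /run/containerd/containerd.sock:

# Optional pre-pull of the control-plane images (kubeadm clusters only).
sudo kubeadm config images pull --kubernetes-version v1.33.2
# List what containerd now has, via the CRI endpoint:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images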
Jul 7 06:12:12.712380 containerd[1923]: time="2025-07-07T06:12:12.712322903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:12.712608 containerd[1923]: time="2025-07-07T06:12:12.712449892Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 06:12:12.712909 containerd[1923]: time="2025-07-07T06:12:12.712871956Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:12.714256 containerd[1923]: time="2025-07-07T06:12:12.714212336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:12.714833 containerd[1923]: time="2025-07-07T06:12:12.714790405Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.091408974s" Jul 7 06:12:12.714833 containerd[1923]: time="2025-07-07T06:12:12.714806806Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 06:12:12.715145 containerd[1923]: time="2025-07-07T06:12:12.715086103Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:12:13.022880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 06:12:13.024036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:13.284269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:13.286270 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:12:13.306773 kubelet[2610]: E0707 06:12:13.306751 2610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:12:13.308214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:12:13.308307 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:12:13.308496 systemd[1]: kubelet.service: Consumed 116ms CPU time, 120.1M memory peak. Jul 7 06:12:13.345962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713500513.mount: Deactivated successfully. 
Jul 7 06:12:13.347161 containerd[1923]: time="2025-07-07T06:12:13.347143338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:12:13.347376 containerd[1923]: time="2025-07-07T06:12:13.347364583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:12:13.347718 containerd[1923]: time="2025-07-07T06:12:13.347669919Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:12:13.348536 containerd[1923]: time="2025-07-07T06:12:13.348496504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:12:13.348887 containerd[1923]: time="2025-07-07T06:12:13.348850756Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.752188ms" Jul 7 06:12:13.348887 containerd[1923]: time="2025-07-07T06:12:13.348864696Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:12:13.349138 containerd[1923]: time="2025-07-07T06:12:13.349116073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 06:12:13.797677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134064016.mount: Deactivated successfully. 
Jul 7 06:12:14.822442 containerd[1923]: time="2025-07-07T06:12:14.822379993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:14.822701 containerd[1923]: time="2025-07-07T06:12:14.822550290Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 06:12:14.822961 containerd[1923]: time="2025-07-07T06:12:14.822920699Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:14.824282 containerd[1923]: time="2025-07-07T06:12:14.824240509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:14.825320 containerd[1923]: time="2025-07-07T06:12:14.825278076Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.476144547s" Jul 7 06:12:14.825320 containerd[1923]: time="2025-07-07T06:12:14.825294894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 06:12:16.769154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:16.769259 systemd[1]: kubelet.service: Consumed 116ms CPU time, 120.1M memory peak. Jul 7 06:12:16.770540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:16.784644 systemd[1]: Reload requested from client PID 2735 ('systemctl') (unit session-11.scope)... Jul 7 06:12:16.784654 systemd[1]: Reloading... Jul 7 06:12:16.822683 zram_generator::config[2779]: No configuration found. Jul 7 06:12:16.879841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:12:16.968276 systemd[1]: Reloading finished in 183 ms. Jul 7 06:12:17.019567 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:12:17.019610 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:12:17.019818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:17.021059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:17.284357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:17.288332 (kubelet)[2846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:12:17.306993 kubelet[2846]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:12:17.306993 kubelet[2846]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 7 06:12:17.306993 kubelet[2846]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:12:17.307214 kubelet[2846]: I0707 06:12:17.307014 2846 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:12:17.764402 kubelet[2846]: I0707 06:12:17.764319 2846 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:12:17.764402 kubelet[2846]: I0707 06:12:17.764332 2846 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:12:17.764500 kubelet[2846]: I0707 06:12:17.764459 2846 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:12:17.789138 kubelet[2846]: I0707 06:12:17.789123 2846 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:12:17.790428 kubelet[2846]: E0707 06:12:17.790411 2846 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://145.40.90.175:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 145.40.90.175:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 06:12:17.793975 kubelet[2846]: I0707 06:12:17.793941 2846 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:12:17.803865 kubelet[2846]: I0707 06:12:17.803856 2846 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:12:17.804043 kubelet[2846]: I0707 06:12:17.804018 2846 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:12:17.804128 kubelet[2846]: I0707 06:12:17.804044 2846 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-2cf65e3e62","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:12:17.804184 kubelet[2846]: I0707 06:12:17.804132 2846 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:12:17.804184 kubelet[2846]: I0707 06:12:17.804138 2846 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:12:17.804217 kubelet[2846]: I0707 06:12:17.804208 2846 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:17.806445 kubelet[2846]: I0707 06:12:17.806437 2846 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:12:17.806467 kubelet[2846]: I0707 06:12:17.806447 2846 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:12:17.806467 kubelet[2846]: I0707 06:12:17.806462 2846 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:12:17.806495 kubelet[2846]: I0707 06:12:17.806473 2846 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:12:17.810379 kubelet[2846]: I0707 06:12:17.810366 2846 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:12:17.810697 kubelet[2846]: I0707 06:12:17.810689 2846 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:12:17.812158 kubelet[2846]: W0707 06:12:17.812151 2846 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
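The deprecation warnings for --container-runtime-endpoint and --volume-plugin-dir above point at the kubelet config file, and the NodeConfig dump shows the systemd cgroup driver and hard-eviction defaults the kubelet settled on. A sketch of the matching KubeletConfiguration fields, assuming the /var/lib/kubelet/config.yaml layout used by kubeadm (values simply mirror what the dump and warnings report; treat the fragment as illustrative, not as this host's actual file):

# Illustrative fragment; merge into the real /var/lib/kubelet/config.yaml
# rather than overwriting it.
cat <<'EOF' > /tmp/kubelet-config-fragment.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
evictionHard:                 # same thresholds as the NodeConfig dump above
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF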
Jul 7 06:12:17.812201 kubelet[2846]: E0707 06:12:17.812180 2846 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://145.40.90.175:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-2cf65e3e62&limit=500&resourceVersion=0\": dial tcp 145.40.90.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 06:12:17.813481 kubelet[2846]: I0707 06:12:17.813472 2846 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:12:17.813533 kubelet[2846]: I0707 06:12:17.813510 2846 server.go:1289] "Started kubelet" Jul 7 06:12:17.813593 kubelet[2846]: I0707 06:12:17.813566 2846 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:12:17.814508 kubelet[2846]: E0707 06:12:17.814492 2846 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:12:17.814618 kubelet[2846]: I0707 06:12:17.814610 2846 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:12:17.815183 kubelet[2846]: E0707 06:12:17.815166 2846 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://145.40.90.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.90.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 06:12:17.820566 kubelet[2846]: I0707 06:12:17.820551 2846 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:12:17.820566 kubelet[2846]: I0707 06:12:17.820560 2846 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:12:17.820611 kubelet[2846]: I0707 06:12:17.820551 2846 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:12:17.835936 kubelet[2846]: E0707 06:12:17.835923 2846 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" Jul 7 06:12:17.835968 kubelet[2846]: E0707 06:12:17.835942 2846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-2cf65e3e62?timeout=10s\": dial tcp 145.40.90.175:6443: connect: connection refused" interval="200ms" Jul 7 06:12:17.835968 kubelet[2846]: I0707 06:12:17.835956 2846 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:12:17.836029 kubelet[2846]: I0707 06:12:17.835986 2846 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:12:17.836029 kubelet[2846]: I0707 06:12:17.836022 2846 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:12:17.836127 kubelet[2846]: I0707 06:12:17.836120 2846 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:12:17.836176 kubelet[2846]: I0707 06:12:17.836167 2846 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:12:17.836225 kubelet[2846]: E0707 06:12:17.836212 2846 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://145.40.90.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.90.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 06:12:17.836594 kubelet[2846]: I0707 06:12:17.836586 2846 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:12:17.836688 kubelet[2846]: I0707 06:12:17.836680 2846 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:12:17.837570 kubelet[2846]: E0707 06:12:17.836785 2846 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.90.175:6443/api/v1/namespaces/default/events\": dial tcp 145.40.90.175:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-a-2cf65e3e62.184fe355d27c0517 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-2cf65e3e62,UID:ci-4372.0.1-a-2cf65e3e62,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-2cf65e3e62,},FirstTimestamp:2025-07-07 06:12:17.813480727 +0000 UTC m=+0.522998930,LastTimestamp:2025-07-07 06:12:17.813480727 +0000 UTC m=+0.522998930,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-2cf65e3e62,}" Jul 7 06:12:17.842565 kubelet[2846]: I0707 06:12:17.842533 2846 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:12:17.842565 kubelet[2846]: I0707 06:12:17.842539 2846 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:12:17.842565 kubelet[2846]: I0707 06:12:17.842567 2846 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:17.843596 kubelet[2846]: I0707 06:12:17.843565 2846 policy_none.go:49] "None policy: Start" Jul 7 06:12:17.843596 kubelet[2846]: I0707 06:12:17.843591 2846 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:12:17.843596 kubelet[2846]: I0707 06:12:17.843597 2846 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:12:17.846150 kubelet[2846]: I0707 06:12:17.846129 2846 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:12:17.846737 kubelet[2846]: I0707 06:12:17.846725 2846 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:12:17.846737 kubelet[2846]: I0707 06:12:17.846738 2846 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:12:17.846802 kubelet[2846]: I0707 06:12:17.846750 2846 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 06:12:17.846802 kubelet[2846]: I0707 06:12:17.846754 2846 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:12:17.846802 kubelet[2846]: E0707 06:12:17.846775 2846 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:12:17.846966 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 7 06:12:17.847743 kubelet[2846]: E0707 06:12:17.847730 2846 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://145.40.90.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.90.175:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 06:12:17.860463 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:12:17.862242 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:12:17.875384 kubelet[2846]: E0707 06:12:17.875338 2846 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:12:17.875544 kubelet[2846]: I0707 06:12:17.875495 2846 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:12:17.875544 kubelet[2846]: I0707 06:12:17.875504 2846 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:12:17.875631 kubelet[2846]: I0707 06:12:17.875612 2846 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:12:17.876078 kubelet[2846]: E0707 06:12:17.876032 2846 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:12:17.876078 kubelet[2846]: E0707 06:12:17.876060 2846 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-a-2cf65e3e62\" not found" Jul 7 06:12:17.957197 systemd[1]: Created slice kubepods-burstable-pod0a96eecf6d5efd29225bca87877e3fd3.slice - libcontainer container kubepods-burstable-pod0a96eecf6d5efd29225bca87877e3fd3.slice. Jul 7 06:12:17.976719 kubelet[2846]: I0707 06:12:17.976637 2846 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:17.976901 kubelet[2846]: E0707 06:12:17.976854 2846 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.175:6443/api/v1/nodes\": dial tcp 145.40.90.175:6443: connect: connection refused" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:17.977079 kubelet[2846]: E0707 06:12:17.977046 2846 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:17.978623 systemd[1]: Created slice kubepods-burstable-podf369499b6e4019270f458e57808df905.slice - libcontainer container kubepods-burstable-podf369499b6e4019270f458e57808df905.slice. Jul 7 06:12:17.999598 kubelet[2846]: E0707 06:12:17.999583 2846 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.001754 systemd[1]: Created slice kubepods-burstable-pod915a21d42fe84994b99f1336f19b368d.slice - libcontainer container kubepods-burstable-pod915a21d42fe84994b99f1336f19b368d.slice. 
Jul 7 06:12:18.003092 kubelet[2846]: E0707 06:12:18.003079 2846 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.037082 kubelet[2846]: I0707 06:12:18.036958 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a96eecf6d5efd29225bca87877e3fd3-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" (UID: \"0a96eecf6d5efd29225bca87877e3fd3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.037567 kubelet[2846]: E0707 06:12:18.037468 2846 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.90.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-2cf65e3e62?timeout=10s\": dial tcp 145.40.90.175:6443: connect: connection refused" interval="400ms" Jul 7 06:12:18.137356 kubelet[2846]: I0707 06:12:18.137218 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.137578 kubelet[2846]: I0707 06:12:18.137496 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a96eecf6d5efd29225bca87877e3fd3-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" (UID: \"0a96eecf6d5efd29225bca87877e3fd3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.137703 kubelet[2846]: I0707 06:12:18.137604 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.137854 kubelet[2846]: I0707 06:12:18.137729 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.137854 kubelet[2846]: I0707 06:12:18.137810 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.138061 kubelet[2846]: I0707 06:12:18.137880 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/915a21d42fe84994b99f1336f19b368d-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-2cf65e3e62\" (UID: \"915a21d42fe84994b99f1336f19b368d\") " 
pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.138061 kubelet[2846]: I0707 06:12:18.137991 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a96eecf6d5efd29225bca87877e3fd3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" (UID: \"0a96eecf6d5efd29225bca87877e3fd3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.138235 kubelet[2846]: I0707 06:12:18.138066 2846 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.180563 kubelet[2846]: I0707 06:12:18.180475 2846 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.181234 kubelet[2846]: E0707 06:12:18.181133 2846 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.90.175:6443/api/v1/nodes\": dial tcp 145.40.90.175:6443: connect: connection refused" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.279241 containerd[1923]: time="2025-07-07T06:12:18.279114057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-2cf65e3e62,Uid:0a96eecf6d5efd29225bca87877e3fd3,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:18.289542 containerd[1923]: time="2025-07-07T06:12:18.289478636Z" level=info msg="connecting to shim 7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1" address="unix:///run/containerd/s/c409388a1d30a63f930f2b86e5a0127acb1e83a3491d34b70c870b82e110d2fe" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:18.300404 containerd[1923]: time="2025-07-07T06:12:18.300366643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-2cf65e3e62,Uid:f369499b6e4019270f458e57808df905,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:18.303944 containerd[1923]: time="2025-07-07T06:12:18.303901077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-2cf65e3e62,Uid:915a21d42fe84994b99f1336f19b368d,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:18.309457 containerd[1923]: time="2025-07-07T06:12:18.309433612Z" level=info msg="connecting to shim 31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618" address="unix:///run/containerd/s/ee532375459303b855c3089305ff71267cf8445369ac64f8ae779b2c9d760a84" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:18.309870 systemd[1]: Started cri-containerd-7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1.scope - libcontainer container 7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1. Jul 7 06:12:18.314926 containerd[1923]: time="2025-07-07T06:12:18.314894792Z" level=info msg="connecting to shim 88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822" address="unix:///run/containerd/s/38b44c10d8ed863bd2c67e1f44ff1398464ea3ccae5d4d25f9f416c14a42ac0b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:18.321293 systemd[1]: Started cri-containerd-31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618.scope - libcontainer container 31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618. 
Jul 7 06:12:18.323565 systemd[1]: Started cri-containerd-88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822.scope - libcontainer container 88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822. Jul 7 06:12:18.337621 containerd[1923]: time="2025-07-07T06:12:18.337597397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-2cf65e3e62,Uid:0a96eecf6d5efd29225bca87877e3fd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1\"" Jul 7 06:12:18.339735 containerd[1923]: time="2025-07-07T06:12:18.339723910Z" level=info msg="CreateContainer within sandbox \"7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:12:18.342476 containerd[1923]: time="2025-07-07T06:12:18.342465296Z" level=info msg="Container 407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:18.345443 containerd[1923]: time="2025-07-07T06:12:18.345431139Z" level=info msg="CreateContainer within sandbox \"7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b\"" Jul 7 06:12:18.345729 containerd[1923]: time="2025-07-07T06:12:18.345705201Z" level=info msg="StartContainer for \"407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b\"" Jul 7 06:12:18.346230 containerd[1923]: time="2025-07-07T06:12:18.346220488Z" level=info msg="connecting to shim 407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b" address="unix:///run/containerd/s/c409388a1d30a63f930f2b86e5a0127acb1e83a3491d34b70c870b82e110d2fe" protocol=ttrpc version=3 Jul 7 06:12:18.365027 systemd[1]: Started cri-containerd-407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b.scope - libcontainer container 407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b. 
Jul 7 06:12:18.366065 containerd[1923]: time="2025-07-07T06:12:18.366040967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-2cf65e3e62,Uid:f369499b6e4019270f458e57808df905,Namespace:kube-system,Attempt:0,} returns sandbox id \"31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618\"" Jul 7 06:12:18.366572 containerd[1923]: time="2025-07-07T06:12:18.366558467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-2cf65e3e62,Uid:915a21d42fe84994b99f1336f19b368d,Namespace:kube-system,Attempt:0,} returns sandbox id \"88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822\"" Jul 7 06:12:18.368523 containerd[1923]: time="2025-07-07T06:12:18.368506981Z" level=info msg="CreateContainer within sandbox \"31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:12:18.368931 containerd[1923]: time="2025-07-07T06:12:18.368916837Z" level=info msg="CreateContainer within sandbox \"88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:12:18.371667 containerd[1923]: time="2025-07-07T06:12:18.371652823Z" level=info msg="Container 7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:18.372200 containerd[1923]: time="2025-07-07T06:12:18.372189387Z" level=info msg="Container 7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:18.374741 containerd[1923]: time="2025-07-07T06:12:18.374729211Z" level=info msg="CreateContainer within sandbox \"88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943\"" Jul 7 06:12:18.374948 containerd[1923]: time="2025-07-07T06:12:18.374933472Z" level=info msg="StartContainer for \"7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943\"" Jul 7 06:12:18.374987 containerd[1923]: time="2025-07-07T06:12:18.374965626Z" level=info msg="CreateContainer within sandbox \"31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b\"" Jul 7 06:12:18.375156 containerd[1923]: time="2025-07-07T06:12:18.375144651Z" level=info msg="StartContainer for \"7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b\"" Jul 7 06:12:18.375460 containerd[1923]: time="2025-07-07T06:12:18.375448057Z" level=info msg="connecting to shim 7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943" address="unix:///run/containerd/s/38b44c10d8ed863bd2c67e1f44ff1398464ea3ccae5d4d25f9f416c14a42ac0b" protocol=ttrpc version=3 Jul 7 06:12:18.375631 containerd[1923]: time="2025-07-07T06:12:18.375620255Z" level=info msg="connecting to shim 7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b" address="unix:///run/containerd/s/ee532375459303b855c3089305ff71267cf8445369ac64f8ae779b2c9d760a84" protocol=ttrpc version=3 Jul 7 06:12:18.390759 systemd[1]: Started cri-containerd-7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b.scope - libcontainer container 7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b. 
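The containerd entries above pair each RunPodSandbox result with a "connecting to shim" message naming the ttrpc socket under /run/containerd/s/ that is reused for the sandbox's containers. A minimal Python sketch of how those two message types could be correlated from a saved copy of this journal output; the file path, regexes, and output format are illustrative assumptions, not part of the log:

import re
import sys
from collections import defaultdict

# Assumed: the journal text above saved to a plain-text file passed as argv[1].
SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*?returns sandbox id \\"([0-9a-f]+)\\"'
)
SHIM_RE = re.compile(
    r'connecting to shim ([0-9a-f]+)" address="(unix:///run/containerd/s/[0-9a-f]+)"'
)

def main(path):
    shim_sockets = {}                  # shim/sandbox id -> unix socket address
    pod_sandboxes = defaultdict(list)  # pod name -> sandbox ids returned for it
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for shim_id, address in SHIM_RE.findall(line):
                shim_sockets[shim_id] = address
            for pod, sandbox_id in SANDBOX_RE.findall(line):
                pod_sandboxes[pod].append(sandbox_id)
    for pod, ids in sorted(pod_sandboxes.items()):
        for sandbox_id in ids:
            socket = shim_sockets.get(sandbox_id, "<socket not seen>")
            print(f"{pod}: sandbox {sandbox_id[:12]} via {socket}")

if __name__ == "__main__":
    main(sys.argv[1])

Run against the lines above, this would report, for example, that kube-apiserver-ci-4372.0.1-a-2cf65e3e62 uses sandbox 7aa20c8fc6e5 over the c409388a... socket.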
Jul 7 06:12:18.391486 systemd[1]: Started cri-containerd-7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943.scope - libcontainer container 7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943. Jul 7 06:12:18.395088 containerd[1923]: time="2025-07-07T06:12:18.395060966Z" level=info msg="StartContainer for \"407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b\" returns successfully" Jul 7 06:12:18.422599 containerd[1923]: time="2025-07-07T06:12:18.422570024Z" level=info msg="StartContainer for \"7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943\" returns successfully" Jul 7 06:12:18.422762 containerd[1923]: time="2025-07-07T06:12:18.422663666Z" level=info msg="StartContainer for \"7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b\" returns successfully" Jul 7 06:12:18.583232 kubelet[2846]: I0707 06:12:18.583164 2846 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.850082 kubelet[2846]: E0707 06:12:18.850016 2846 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.850339 kubelet[2846]: E0707 06:12:18.850326 2846 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.850912 kubelet[2846]: E0707 06:12:18.850903 2846 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:18.987390 kubelet[2846]: E0707 06:12:18.987330 2846 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-a-2cf65e3e62\" not found" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.087479 kubelet[2846]: I0707 06:12:19.087459 2846 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.087560 kubelet[2846]: E0707 06:12:19.087487 2846 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-a-2cf65e3e62\": node \"ci-4372.0.1-a-2cf65e3e62\" not found" Jul 7 06:12:19.136523 kubelet[2846]: I0707 06:12:19.136456 2846 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.140387 kubelet[2846]: E0707 06:12:19.140374 2846 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-2cf65e3e62\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.140387 kubelet[2846]: I0707 06:12:19.140386 2846 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.141246 kubelet[2846]: E0707 06:12:19.141215 2846 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.141246 kubelet[2846]: I0707 06:12:19.141243 2846 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.142060 kubelet[2846]: E0707 06:12:19.142010 2846 kubelet.go:3311] 
"Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.809988 kubelet[2846]: I0707 06:12:19.809906 2846 apiserver.go:52] "Watching apiserver" Jul 7 06:12:19.836634 kubelet[2846]: I0707 06:12:19.836585 2846 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:12:19.853042 kubelet[2846]: I0707 06:12:19.852968 2846 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.853309 kubelet[2846]: I0707 06:12:19.853230 2846 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.853609 kubelet[2846]: I0707 06:12:19.853568 2846 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.856841 kubelet[2846]: E0707 06:12:19.856790 2846 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.857775 kubelet[2846]: E0707 06:12:19.857721 2846 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:19.857775 kubelet[2846]: E0707 06:12:19.857753 2846 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-2cf65e3e62\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:21.255305 systemd[1]: Reload requested from client PID 3172 ('systemctl') (unit session-11.scope)... Jul 7 06:12:21.255312 systemd[1]: Reloading... Jul 7 06:12:21.290725 zram_generator::config[3217]: No configuration found. Jul 7 06:12:21.347770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:12:21.444774 systemd[1]: Reloading finished in 189 ms. Jul 7 06:12:21.463467 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:21.476360 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:12:21.476492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:21.476519 systemd[1]: kubelet.service: Consumed 914ms CPU time, 135.5M memory peak. Jul 7 06:12:21.477513 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:12:21.768058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:12:21.770712 (kubelet)[3281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:12:21.796404 kubelet[3281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:12:21.796404 kubelet[3281]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:12:21.796404 kubelet[3281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:12:21.796639 kubelet[3281]: I0707 06:12:21.796442 3281 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:12:21.800387 kubelet[3281]: I0707 06:12:21.800342 3281 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 06:12:21.800387 kubelet[3281]: I0707 06:12:21.800354 3281 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:12:21.800467 kubelet[3281]: I0707 06:12:21.800461 3281 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 06:12:21.801159 kubelet[3281]: I0707 06:12:21.801116 3281 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 06:12:21.802267 kubelet[3281]: I0707 06:12:21.802258 3281 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:12:21.804019 kubelet[3281]: I0707 06:12:21.803987 3281 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:12:21.811169 kubelet[3281]: I0707 06:12:21.811129 3281 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:12:21.811273 kubelet[3281]: I0707 06:12:21.811230 3281 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:12:21.811351 kubelet[3281]: I0707 06:12:21.811245 3281 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4372.0.1-a-2cf65e3e62","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:12:21.811351 kubelet[3281]: I0707 06:12:21.811328 3281 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:12:21.811351 kubelet[3281]: I0707 06:12:21.811334 3281 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 06:12:21.811436 kubelet[3281]: I0707 06:12:21.811359 3281 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:21.811472 kubelet[3281]: I0707 06:12:21.811467 3281 kubelet.go:480] "Attempting to sync node with API server" Jul 7 06:12:21.811490 kubelet[3281]: I0707 06:12:21.811474 3281 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:12:21.811490 kubelet[3281]: I0707 06:12:21.811485 3281 kubelet.go:386] "Adding apiserver pod source" Jul 7 06:12:21.811520 kubelet[3281]: I0707 06:12:21.811493 3281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:12:21.811948 kubelet[3281]: I0707 06:12:21.811934 3281 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:12:21.812223 kubelet[3281]: I0707 06:12:21.812215 3281 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 06:12:21.813357 kubelet[3281]: I0707 06:12:21.813348 3281 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:12:21.813402 kubelet[3281]: I0707 06:12:21.813369 3281 server.go:1289] "Started kubelet" Jul 7 06:12:21.813432 kubelet[3281]: I0707 06:12:21.813402 3281 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:12:21.813463 kubelet[3281]: I0707 06:12:21.813413 3281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:12:21.813635 kubelet[3281]: I0707 06:12:21.813625 3281 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:12:21.814182 kubelet[3281]: I0707 
06:12:21.814171 3281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:12:21.814224 kubelet[3281]: I0707 06:12:21.814189 3281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:12:21.815177 kubelet[3281]: I0707 06:12:21.815042 3281 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:12:21.815177 kubelet[3281]: E0707 06:12:21.815135 3281 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-2cf65e3e62\" not found" Jul 7 06:12:21.815293 kubelet[3281]: I0707 06:12:21.815273 3281 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:12:21.815485 kubelet[3281]: I0707 06:12:21.815405 3281 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:12:21.815783 kubelet[3281]: I0707 06:12:21.815768 3281 server.go:317] "Adding debug handlers to kubelet server" Jul 7 06:12:21.816016 kubelet[3281]: E0707 06:12:21.815997 3281 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:12:21.816116 kubelet[3281]: I0707 06:12:21.816054 3281 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:12:21.816939 kubelet[3281]: I0707 06:12:21.816927 3281 factory.go:223] Registration of the containerd container factory successfully Jul 7 06:12:21.816939 kubelet[3281]: I0707 06:12:21.816937 3281 factory.go:223] Registration of the systemd container factory successfully Jul 7 06:12:21.819702 kubelet[3281]: I0707 06:12:21.819683 3281 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 06:12:21.822717 kubelet[3281]: I0707 06:12:21.822702 3281 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 06:12:21.822717 kubelet[3281]: I0707 06:12:21.822714 3281 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 06:12:21.822820 kubelet[3281]: I0707 06:12:21.822727 3281 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
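The "Creating Container Manager object based on Node Config" entry above dumps the effective nodeConfig as JSON, including the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A hedged Python sketch of pulling that JSON back out of a saved copy of the log and printing the thresholds; the file path, and treating the first nodeConfig= occurrence as the one of interest, are illustrative assumptions:

import json
import sys

# Assumed: the journal output above saved as plain text and passed as argv[1].
def main(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        text = fh.read()
    start = text.index("nodeConfig=") + len("nodeConfig=")
    # raw_decode parses the leading {...} and ignores the log text that follows it.
    node_config, _ = json.JSONDecoder().raw_decode(text[start:])
    for threshold in node_config["HardEvictionThresholds"]:
        value = threshold["Value"]
        amount = value.get("Quantity") or f'{value["Percentage"]:.0%}'
        print(f'{threshold["Signal"]} {threshold["Operator"]} {amount}')

if __name__ == "__main__":
    main(sys.argv[1])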
Jul 7 06:12:21.822820 kubelet[3281]: I0707 06:12:21.822732 3281 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 06:12:21.822820 kubelet[3281]: E0707 06:12:21.822758 3281 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:12:21.830873 kubelet[3281]: I0707 06:12:21.830831 3281 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:12:21.830873 kubelet[3281]: I0707 06:12:21.830840 3281 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:12:21.830873 kubelet[3281]: I0707 06:12:21.830850 3281 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:12:21.830981 kubelet[3281]: I0707 06:12:21.830921 3281 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:12:21.830981 kubelet[3281]: I0707 06:12:21.830926 3281 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:12:21.830981 kubelet[3281]: I0707 06:12:21.830935 3281 policy_none.go:49] "None policy: Start" Jul 7 06:12:21.830981 kubelet[3281]: I0707 06:12:21.830940 3281 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:12:21.830981 kubelet[3281]: I0707 06:12:21.830945 3281 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:12:21.831049 kubelet[3281]: I0707 06:12:21.830995 3281 state_mem.go:75] "Updated machine memory state" Jul 7 06:12:21.832808 kubelet[3281]: E0707 06:12:21.832799 3281 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 06:12:21.832892 kubelet[3281]: I0707 06:12:21.832885 3281 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:12:21.832916 kubelet[3281]: I0707 06:12:21.832894 3281 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:12:21.833000 kubelet[3281]: I0707 06:12:21.832993 3281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:12:21.833313 kubelet[3281]: E0707 06:12:21.833303 3281 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:12:21.925135 kubelet[3281]: I0707 06:12:21.925060 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:21.925417 kubelet[3281]: I0707 06:12:21.925379 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:21.925681 kubelet[3281]: I0707 06:12:21.925450 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:21.933353 kubelet[3281]: I0707 06:12:21.933281 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:12:21.933724 kubelet[3281]: I0707 06:12:21.933289 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:12:21.933724 kubelet[3281]: I0707 06:12:21.933564 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:12:21.938717 kubelet[3281]: I0707 06:12:21.938698 3281 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:21.941768 kubelet[3281]: I0707 06:12:21.941757 3281 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:21.941829 kubelet[3281]: I0707 06:12:21.941789 3281 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.116617 kubelet[3281]: I0707 06:12:22.116525 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.116986 kubelet[3281]: I0707 06:12:22.116634 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a96eecf6d5efd29225bca87877e3fd3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" (UID: \"0a96eecf6d5efd29225bca87877e3fd3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.116986 kubelet[3281]: I0707 06:12:22.116749 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.116986 kubelet[3281]: I0707 06:12:22.116806 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.116986 kubelet[3281]: I0707 06:12:22.116859 3281 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.116986 kubelet[3281]: I0707 06:12:22.116917 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/915a21d42fe84994b99f1336f19b368d-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-2cf65e3e62\" (UID: \"915a21d42fe84994b99f1336f19b368d\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.117502 kubelet[3281]: I0707 06:12:22.116967 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a96eecf6d5efd29225bca87877e3fd3-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" (UID: \"0a96eecf6d5efd29225bca87877e3fd3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.117502 kubelet[3281]: I0707 06:12:22.117056 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a96eecf6d5efd29225bca87877e3fd3-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" (UID: \"0a96eecf6d5efd29225bca87877e3fd3\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.117502 kubelet[3281]: I0707 06:12:22.117112 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f369499b6e4019270f458e57808df905-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" (UID: \"f369499b6e4019270f458e57808df905\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.812510 kubelet[3281]: I0707 06:12:22.812481 3281 apiserver.go:52] "Watching apiserver" Jul 7 06:12:22.816185 kubelet[3281]: I0707 06:12:22.816139 3281 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:12:22.827584 kubelet[3281]: I0707 06:12:22.827560 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.827745 kubelet[3281]: I0707 06:12:22.827639 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.827745 kubelet[3281]: I0707 06:12:22.827700 3281 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.830778 kubelet[3281]: I0707 06:12:22.830763 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:12:22.830890 kubelet[3281]: E0707 06:12:22.830797 3281 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-2cf65e3e62\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.831046 kubelet[3281]: I0707 06:12:22.831028 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must not contain dots]" Jul 7 06:12:22.831046 kubelet[3281]: I0707 06:12:22.831037 3281 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jul 7 06:12:22.831134 kubelet[3281]: E0707 06:12:22.831062 3281 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-2cf65e3e62\" already exists" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.831134 kubelet[3281]: E0707 06:12:22.831069 3281 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-2cf65e3e62\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:22.843354 kubelet[3281]: I0707 06:12:22.843308 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-2cf65e3e62" podStartSLOduration=1.843293154 podStartE2EDuration="1.843293154s" podCreationTimestamp="2025-07-07 06:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:22.839275723 +0000 UTC m=+1.066249519" watchObservedRunningTime="2025-07-07 06:12:22.843293154 +0000 UTC m=+1.070266948" Jul 7 06:12:22.863075 kubelet[3281]: I0707 06:12:22.863023 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-a-2cf65e3e62" podStartSLOduration=1.863000472 podStartE2EDuration="1.863000472s" podCreationTimestamp="2025-07-07 06:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:22.843418395 +0000 UTC m=+1.070392191" watchObservedRunningTime="2025-07-07 06:12:22.863000472 +0000 UTC m=+1.089974264" Jul 7 06:12:22.868238 kubelet[3281]: I0707 06:12:22.868211 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-a-2cf65e3e62" podStartSLOduration=1.8682015440000002 podStartE2EDuration="1.868201544s" podCreationTimestamp="2025-07-07 06:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:22.863001336 +0000 UTC m=+1.089975133" watchObservedRunningTime="2025-07-07 06:12:22.868201544 +0000 UTC m=+1.095175337" Jul 7 06:12:27.481989 kubelet[3281]: I0707 06:12:27.481886 3281 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:12:27.483034 containerd[1923]: time="2025-07-07T06:12:27.482601124Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:12:27.483627 kubelet[3281]: I0707 06:12:27.483042 3281 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:12:28.159909 systemd[1]: Created slice kubepods-besteffort-pod1ca01462_0b32_4319_bb00_af8b7102bfc2.slice - libcontainer container kubepods-besteffort-pod1ca01462_0b32_4319_bb00_af8b7102bfc2.slice. 
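The pod_startup_latency_tracker entries above record an observed startup duration (podStartSLOduration and podStartE2EDuration) for each pod once it is running. A hedged sketch for summarizing those observations from a saved copy of this log; the path and output formatting are assumptions:

import re
import sys

# Assumed: journal output saved as plain text and passed as argv[1].
STARTUP_RE = re.compile(
    r'Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)'
)

def main(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        observations = STARTUP_RE.findall(fh.read())
    for pod, duration in sorted(observations, key=lambda item: float(item[1])):
        print(f"{float(duration):7.3f}s  {pod}")

if __name__ == "__main__":
    main(sys.argv[1])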
Jul 7 06:12:28.260037 kubelet[3281]: I0707 06:12:28.259964 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ca01462-0b32-4319-bb00-af8b7102bfc2-lib-modules\") pod \"kube-proxy-sxsl8\" (UID: \"1ca01462-0b32-4319-bb00-af8b7102bfc2\") " pod="kube-system/kube-proxy-sxsl8" Jul 7 06:12:28.260333 kubelet[3281]: I0707 06:12:28.260062 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2xdc\" (UniqueName: \"kubernetes.io/projected/1ca01462-0b32-4319-bb00-af8b7102bfc2-kube-api-access-m2xdc\") pod \"kube-proxy-sxsl8\" (UID: \"1ca01462-0b32-4319-bb00-af8b7102bfc2\") " pod="kube-system/kube-proxy-sxsl8" Jul 7 06:12:28.260333 kubelet[3281]: I0707 06:12:28.260129 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ca01462-0b32-4319-bb00-af8b7102bfc2-xtables-lock\") pod \"kube-proxy-sxsl8\" (UID: \"1ca01462-0b32-4319-bb00-af8b7102bfc2\") " pod="kube-system/kube-proxy-sxsl8" Jul 7 06:12:28.260333 kubelet[3281]: I0707 06:12:28.260181 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ca01462-0b32-4319-bb00-af8b7102bfc2-kube-proxy\") pod \"kube-proxy-sxsl8\" (UID: \"1ca01462-0b32-4319-bb00-af8b7102bfc2\") " pod="kube-system/kube-proxy-sxsl8" Jul 7 06:12:28.307380 systemd[1]: Created slice kubepods-besteffort-pod06b91e3c_2205_4986_a751_179f692fb702.slice - libcontainer container kubepods-besteffort-pod06b91e3c_2205_4986_a751_179f692fb702.slice. Jul 7 06:12:28.361009 kubelet[3281]: I0707 06:12:28.360926 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbzg\" (UniqueName: \"kubernetes.io/projected/06b91e3c-2205-4986-a751-179f692fb702-kube-api-access-dkbzg\") pod \"tigera-operator-747864d56d-mp7bd\" (UID: \"06b91e3c-2205-4986-a751-179f692fb702\") " pod="tigera-operator/tigera-operator-747864d56d-mp7bd" Jul 7 06:12:28.361396 kubelet[3281]: I0707 06:12:28.361332 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06b91e3c-2205-4986-a751-179f692fb702-var-lib-calico\") pod \"tigera-operator-747864d56d-mp7bd\" (UID: \"06b91e3c-2205-4986-a751-179f692fb702\") " pod="tigera-operator/tigera-operator-747864d56d-mp7bd" Jul 7 06:12:28.472022 containerd[1923]: time="2025-07-07T06:12:28.471790723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxsl8,Uid:1ca01462-0b32-4319-bb00-af8b7102bfc2,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:28.610870 containerd[1923]: time="2025-07-07T06:12:28.610779682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mp7bd,Uid:06b91e3c-2205-4986-a751-179f692fb702,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:12:28.897047 containerd[1923]: time="2025-07-07T06:12:28.896976017Z" level=info msg="connecting to shim e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1" address="unix:///run/containerd/s/7fbb342a65d01442bc6a2e3055c5c03d73059e200368721bd38f3580b518cf2c" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:28.900579 containerd[1923]: time="2025-07-07T06:12:28.900528677Z" level=info msg="connecting to shim 
49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254" address="unix:///run/containerd/s/bc4c8de9fc287ad5ffa4712ffaa5f24cc45824f063379e23099085d0d196b859" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:28.922941 systemd[1]: Started cri-containerd-e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1.scope - libcontainer container e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1. Jul 7 06:12:28.925241 systemd[1]: Started cri-containerd-49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254.scope - libcontainer container 49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254. Jul 7 06:12:28.937240 containerd[1923]: time="2025-07-07T06:12:28.937164640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxsl8,Uid:1ca01462-0b32-4319-bb00-af8b7102bfc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1\"" Jul 7 06:12:28.939677 containerd[1923]: time="2025-07-07T06:12:28.939630537Z" level=info msg="CreateContainer within sandbox \"e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:12:28.943698 containerd[1923]: time="2025-07-07T06:12:28.943667744Z" level=info msg="Container e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:28.946840 containerd[1923]: time="2025-07-07T06:12:28.946794559Z" level=info msg="CreateContainer within sandbox \"e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8\"" Jul 7 06:12:28.947094 containerd[1923]: time="2025-07-07T06:12:28.947041867Z" level=info msg="StartContainer for \"e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8\"" Jul 7 06:12:28.947995 containerd[1923]: time="2025-07-07T06:12:28.947932076Z" level=info msg="connecting to shim e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8" address="unix:///run/containerd/s/7fbb342a65d01442bc6a2e3055c5c03d73059e200368721bd38f3580b518cf2c" protocol=ttrpc version=3 Jul 7 06:12:28.965848 systemd[1]: Started cri-containerd-e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8.scope - libcontainer container e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8. Jul 7 06:12:28.966997 containerd[1923]: time="2025-07-07T06:12:28.966944833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-mp7bd,Uid:06b91e3c-2205-4986-a751-179f692fb702,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254\"" Jul 7 06:12:28.967637 containerd[1923]: time="2025-07-07T06:12:28.967625724Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:12:28.985987 containerd[1923]: time="2025-07-07T06:12:28.985965610Z" level=info msg="StartContainer for \"e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8\" returns successfully" Jul 7 06:12:30.317826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088337376.mount: Deactivated successfully. 
Jul 7 06:12:30.383077 kubelet[3281]: I0707 06:12:30.383030 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sxsl8" podStartSLOduration=2.383015792 podStartE2EDuration="2.383015792s" podCreationTimestamp="2025-07-07 06:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:29.874777651 +0000 UTC m=+8.101751511" watchObservedRunningTime="2025-07-07 06:12:30.383015792 +0000 UTC m=+8.609989581" Jul 7 06:12:30.554745 containerd[1923]: time="2025-07-07T06:12:30.554688027Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:30.555016 containerd[1923]: time="2025-07-07T06:12:30.554975331Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 06:12:30.555441 containerd[1923]: time="2025-07-07T06:12:30.555398837Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:30.556647 containerd[1923]: time="2025-07-07T06:12:30.556600279Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:30.556868 containerd[1923]: time="2025-07-07T06:12:30.556825890Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.589184418s" Jul 7 06:12:30.556868 containerd[1923]: time="2025-07-07T06:12:30.556842566Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 06:12:30.558329 containerd[1923]: time="2025-07-07T06:12:30.558318038Z" level=info msg="CreateContainer within sandbox \"49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:12:30.560981 containerd[1923]: time="2025-07-07T06:12:30.560946219Z" level=info msg="Container 5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:30.563286 containerd[1923]: time="2025-07-07T06:12:30.563250405Z" level=info msg="CreateContainer within sandbox \"49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba\"" Jul 7 06:12:30.563460 containerd[1923]: time="2025-07-07T06:12:30.563450023Z" level=info msg="StartContainer for \"5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba\"" Jul 7 06:12:30.564592 containerd[1923]: time="2025-07-07T06:12:30.564454123Z" level=info msg="connecting to shim 5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba" address="unix:///run/containerd/s/bc4c8de9fc287ad5ffa4712ffaa5f24cc45824f063379e23099085d0d196b859" protocol=ttrpc version=3 Jul 7 06:12:30.583937 systemd[1]: Started 
cri-containerd-5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba.scope - libcontainer container 5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba. Jul 7 06:12:30.596220 containerd[1923]: time="2025-07-07T06:12:30.596156124Z" level=info msg="StartContainer for \"5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba\" returns successfully" Jul 7 06:12:30.894478 kubelet[3281]: I0707 06:12:30.894226 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-mp7bd" podStartSLOduration=1.304429487 podStartE2EDuration="2.894188251s" podCreationTimestamp="2025-07-07 06:12:28 +0000 UTC" firstStartedPulling="2025-07-07 06:12:28.967455285 +0000 UTC m=+7.194429074" lastFinishedPulling="2025-07-07 06:12:30.557214047 +0000 UTC m=+8.784187838" observedRunningTime="2025-07-07 06:12:30.877611229 +0000 UTC m=+9.104585090" watchObservedRunningTime="2025-07-07 06:12:30.894188251 +0000 UTC m=+9.121162095" Jul 7 06:12:34.820771 update_engine[1918]: I20250707 06:12:34.820706 1918 update_attempter.cc:509] Updating boot flags... Jul 7 06:12:35.344739 sudo[2223]: pam_unix(sudo:session): session closed for user root Jul 7 06:12:35.345523 sshd[2222]: Connection closed by 147.75.109.163 port 56518 Jul 7 06:12:35.345718 sshd-session[2220]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:35.347923 systemd[1]: sshd@8-145.40.90.175:22-147.75.109.163:56518.service: Deactivated successfully. Jul 7 06:12:35.349359 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:12:35.349548 systemd[1]: session-11.scope: Consumed 3.612s CPU time, 237.7M memory peak. Jul 7 06:12:35.351568 systemd-logind[1913]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:12:35.353019 systemd-logind[1913]: Removed session 11. Jul 7 06:12:37.585414 systemd[1]: Created slice kubepods-besteffort-podf017431f_f3ff_4522_833b_c93ff6322423.slice - libcontainer container kubepods-besteffort-podf017431f_f3ff_4522_833b_c93ff6322423.slice. 
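The "Pulled image" entry above reports the pull of quay.io/tigera/operator:v1.38.3 together with its digest and the wall-clock pull time (1.589184418s). A hedged sketch for extracting image pull durations from a saved copy of this log; the file path and regex are assumptions tuned to the message format visible above:

import re
import sys

# Assumed: journal output saved as plain text and passed as argv[1].
PULLED_RE = re.compile(r'Pulled image \\"([^\\]+)\\".*? in ([^ "]+)"')

def main(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for image, duration in PULLED_RE.findall(fh.read()):
            print(f"{duration:>15}  {image}")

if __name__ == "__main__":
    main(sys.argv[1])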
Jul 7 06:12:37.627597 kubelet[3281]: I0707 06:12:37.627566 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f017431f-f3ff-4522-833b-c93ff6322423-tigera-ca-bundle\") pod \"calico-typha-fc6569587-l94vf\" (UID: \"f017431f-f3ff-4522-833b-c93ff6322423\") " pod="calico-system/calico-typha-fc6569587-l94vf" Jul 7 06:12:37.627597 kubelet[3281]: I0707 06:12:37.627597 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrtsc\" (UniqueName: \"kubernetes.io/projected/f017431f-f3ff-4522-833b-c93ff6322423-kube-api-access-rrtsc\") pod \"calico-typha-fc6569587-l94vf\" (UID: \"f017431f-f3ff-4522-833b-c93ff6322423\") " pod="calico-system/calico-typha-fc6569587-l94vf" Jul 7 06:12:37.627597 kubelet[3281]: I0707 06:12:37.627611 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f017431f-f3ff-4522-833b-c93ff6322423-typha-certs\") pod \"calico-typha-fc6569587-l94vf\" (UID: \"f017431f-f3ff-4522-833b-c93ff6322423\") " pod="calico-system/calico-typha-fc6569587-l94vf" Jul 7 06:12:37.889071 containerd[1923]: time="2025-07-07T06:12:37.888849318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc6569587-l94vf,Uid:f017431f-f3ff-4522-833b-c93ff6322423,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:37.897075 containerd[1923]: time="2025-07-07T06:12:37.897010463Z" level=info msg="connecting to shim bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77" address="unix:///run/containerd/s/1e461f7911fab05ad59a81ff0ea5e5c5d2fbe139ef1c1c187ee35aa1db2f19b8" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:37.907342 systemd[1]: Created slice kubepods-besteffort-poda6bd70ed_f95b_4e72_8ef7_2f1184763810.slice - libcontainer container kubepods-besteffort-poda6bd70ed_f95b_4e72_8ef7_2f1184763810.slice. Jul 7 06:12:37.921796 systemd[1]: Started cri-containerd-bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77.scope - libcontainer container bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77. 
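The systemd "Created slice kubepods-besteffort-pod...slice" entries above use a cgroup name derived from the pod UID, with the dashes replaced by underscores (compare the calico-typha UID f017431f-f3ff-4522-833b-c93ff6322423 in the volume entries with the slice name systemd reports). A small illustrative helper for the BestEffort case seen in this log; the function name is made up here, and pods in other QoS classes land under different parent slices:

def besteffort_pod_slice(pod_uid: str) -> str:
    # Dashes in the pod UID become underscores in the systemd slice name,
    # matching the "Created slice kubepods-besteffort-pod...slice" entries in this log.
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

# The calico-typha pod UID seen in the volume entries maps to the slice systemd created:
print(besteffort_pod_slice("f017431f-f3ff-4522-833b-c93ff6322423"))
# -> kubepods-besteffort-podf017431f_f3ff_4522_833b_c93ff6322423.slice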
Jul 7 06:12:37.929361 kubelet[3281]: I0707 06:12:37.929316 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a6bd70ed-f95b-4e72-8ef7-2f1184763810-node-certs\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929361 kubelet[3281]: I0707 06:12:37.929339 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-policysync\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929361 kubelet[3281]: I0707 06:12:37.929357 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7tv\" (UniqueName: \"kubernetes.io/projected/a6bd70ed-f95b-4e72-8ef7-2f1184763810-kube-api-access-qv7tv\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929507 kubelet[3281]: I0707 06:12:37.929390 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-var-lib-calico\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929507 kubelet[3281]: I0707 06:12:37.929419 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-cni-bin-dir\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929507 kubelet[3281]: I0707 06:12:37.929436 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-cni-log-dir\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929507 kubelet[3281]: I0707 06:12:37.929450 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-xtables-lock\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929507 kubelet[3281]: I0707 06:12:37.929468 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-flexvol-driver-host\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929621 kubelet[3281]: I0707 06:12:37.929485 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6bd70ed-f95b-4e72-8ef7-2f1184763810-tigera-ca-bundle\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929621 kubelet[3281]: I0707 06:12:37.929501 3281 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-cni-net-dir\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929621 kubelet[3281]: I0707 06:12:37.929515 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-var-run-calico\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.929621 kubelet[3281]: I0707 06:12:37.929530 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6bd70ed-f95b-4e72-8ef7-2f1184763810-lib-modules\") pod \"calico-node-j5cn9\" (UID: \"a6bd70ed-f95b-4e72-8ef7-2f1184763810\") " pod="calico-system/calico-node-j5cn9" Jul 7 06:12:37.947035 containerd[1923]: time="2025-07-07T06:12:37.947010391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc6569587-l94vf,Uid:f017431f-f3ff-4522-833b-c93ff6322423,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77\"" Jul 7 06:12:37.947726 containerd[1923]: time="2025-07-07T06:12:37.947711382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:12:38.032256 kubelet[3281]: E0707 06:12:38.032173 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.032498 kubelet[3281]: W0707 06:12:38.032263 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.032498 kubelet[3281]: E0707 06:12:38.032333 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.032901 kubelet[3281]: E0707 06:12:38.032856 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.032901 kubelet[3281]: W0707 06:12:38.032888 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.033234 kubelet[3281]: E0707 06:12:38.032925 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.033562 kubelet[3281]: E0707 06:12:38.033514 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.033758 kubelet[3281]: W0707 06:12:38.033561 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.033758 kubelet[3281]: E0707 06:12:38.033597 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.038366 kubelet[3281]: E0707 06:12:38.038272 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.038366 kubelet[3281]: W0707 06:12:38.038310 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.038366 kubelet[3281]: E0707 06:12:38.038341 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.049770 kubelet[3281]: E0707 06:12:38.049681 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.049770 kubelet[3281]: W0707 06:12:38.049716 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.049770 kubelet[3281]: E0707 06:12:38.049750 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.210094 containerd[1923]: time="2025-07-07T06:12:38.209893424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5cn9,Uid:a6bd70ed-f95b-4e72-8ef7-2f1184763810,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:38.212795 kubelet[3281]: E0707 06:12:38.212767 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgmw9" podUID="3b3041ea-8291-4b09-bfe8-d14c2bd2ab11" Jul 7 06:12:38.219308 containerd[1923]: time="2025-07-07T06:12:38.219256914Z" level=info msg="connecting to shim 8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a" address="unix:///run/containerd/s/e0d2a143668e637d0593fc67bc05e6445d4ddbbf3e69e065fb67aab0828817cf" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:38.220745 kubelet[3281]: E0707 06:12:38.220732 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.220783 kubelet[3281]: W0707 06:12:38.220743 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.220783 kubelet[3281]: E0707 06:12:38.220758 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.220868 kubelet[3281]: E0707 06:12:38.220862 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.220868 kubelet[3281]: W0707 06:12:38.220867 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.220913 kubelet[3281]: E0707 06:12:38.220871 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.220944 kubelet[3281]: E0707 06:12:38.220938 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.220944 kubelet[3281]: W0707 06:12:38.220943 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.220983 kubelet[3281]: E0707 06:12:38.220947 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221044 kubelet[3281]: E0707 06:12:38.221038 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221044 kubelet[3281]: W0707 06:12:38.221043 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221085 kubelet[3281]: E0707 06:12:38.221047 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221119 kubelet[3281]: E0707 06:12:38.221113 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221119 kubelet[3281]: W0707 06:12:38.221118 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221155 kubelet[3281]: E0707 06:12:38.221122 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221184 kubelet[3281]: E0707 06:12:38.221178 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221184 kubelet[3281]: W0707 06:12:38.221182 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221220 kubelet[3281]: E0707 06:12:38.221187 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.221249 kubelet[3281]: E0707 06:12:38.221244 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221269 kubelet[3281]: W0707 06:12:38.221250 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221269 kubelet[3281]: E0707 06:12:38.221254 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221317 kubelet[3281]: E0707 06:12:38.221312 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221317 kubelet[3281]: W0707 06:12:38.221316 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221355 kubelet[3281]: E0707 06:12:38.221321 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221387 kubelet[3281]: E0707 06:12:38.221382 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221387 kubelet[3281]: W0707 06:12:38.221386 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221424 kubelet[3281]: E0707 06:12:38.221391 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221453 kubelet[3281]: E0707 06:12:38.221448 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221453 kubelet[3281]: W0707 06:12:38.221453 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221492 kubelet[3281]: E0707 06:12:38.221457 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221531 kubelet[3281]: E0707 06:12:38.221526 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221552 kubelet[3281]: W0707 06:12:38.221531 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221552 kubelet[3281]: E0707 06:12:38.221535 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.221615 kubelet[3281]: E0707 06:12:38.221609 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221615 kubelet[3281]: W0707 06:12:38.221614 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221660 kubelet[3281]: E0707 06:12:38.221618 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221699 kubelet[3281]: E0707 06:12:38.221694 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221716 kubelet[3281]: W0707 06:12:38.221699 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221716 kubelet[3281]: E0707 06:12:38.221703 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221768 kubelet[3281]: E0707 06:12:38.221764 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221788 kubelet[3281]: W0707 06:12:38.221768 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221788 kubelet[3281]: E0707 06:12:38.221772 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221835 kubelet[3281]: E0707 06:12:38.221830 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221835 kubelet[3281]: W0707 06:12:38.221835 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221868 kubelet[3281]: E0707 06:12:38.221839 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.221900 kubelet[3281]: E0707 06:12:38.221895 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221900 kubelet[3281]: W0707 06:12:38.221900 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.221934 kubelet[3281]: E0707 06:12:38.221904 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.221973 kubelet[3281]: E0707 06:12:38.221968 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.221973 kubelet[3281]: W0707 06:12:38.221973 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.222006 kubelet[3281]: E0707 06:12:38.221977 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.222039 kubelet[3281]: E0707 06:12:38.222035 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.222039 kubelet[3281]: W0707 06:12:38.222039 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.222076 kubelet[3281]: E0707 06:12:38.222043 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.222104 kubelet[3281]: E0707 06:12:38.222099 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.222104 kubelet[3281]: W0707 06:12:38.222104 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.222135 kubelet[3281]: E0707 06:12:38.222108 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.222169 kubelet[3281]: E0707 06:12:38.222164 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.222169 kubelet[3281]: W0707 06:12:38.222168 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.222205 kubelet[3281]: E0707 06:12:38.222172 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.233085 kubelet[3281]: E0707 06:12:38.233068 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233085 kubelet[3281]: W0707 06:12:38.233081 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233184 kubelet[3281]: E0707 06:12:38.233094 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.233184 kubelet[3281]: I0707 06:12:38.233113 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3b3041ea-8291-4b09-bfe8-d14c2bd2ab11-registration-dir\") pod \"csi-node-driver-rgmw9\" (UID: \"3b3041ea-8291-4b09-bfe8-d14c2bd2ab11\") " pod="calico-system/csi-node-driver-rgmw9" Jul 7 06:12:38.233252 kubelet[3281]: E0707 06:12:38.233244 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233252 kubelet[3281]: W0707 06:12:38.233250 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233308 kubelet[3281]: E0707 06:12:38.233259 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.233308 kubelet[3281]: I0707 06:12:38.233273 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b3041ea-8291-4b09-bfe8-d14c2bd2ab11-kubelet-dir\") pod \"csi-node-driver-rgmw9\" (UID: \"3b3041ea-8291-4b09-bfe8-d14c2bd2ab11\") " pod="calico-system/csi-node-driver-rgmw9" Jul 7 06:12:38.233378 kubelet[3281]: E0707 06:12:38.233370 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233378 kubelet[3281]: W0707 06:12:38.233376 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233435 kubelet[3281]: E0707 06:12:38.233384 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.233435 kubelet[3281]: I0707 06:12:38.233397 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3b3041ea-8291-4b09-bfe8-d14c2bd2ab11-varrun\") pod \"csi-node-driver-rgmw9\" (UID: \"3b3041ea-8291-4b09-bfe8-d14c2bd2ab11\") " pod="calico-system/csi-node-driver-rgmw9" Jul 7 06:12:38.233488 kubelet[3281]: E0707 06:12:38.233481 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233508 kubelet[3281]: W0707 06:12:38.233488 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233508 kubelet[3281]: E0707 06:12:38.233496 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.233564 kubelet[3281]: E0707 06:12:38.233559 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233583 kubelet[3281]: W0707 06:12:38.233564 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233583 kubelet[3281]: E0707 06:12:38.233569 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.233723 kubelet[3281]: E0707 06:12:38.233649 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233723 kubelet[3281]: W0707 06:12:38.233655 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233723 kubelet[3281]: E0707 06:12:38.233659 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.233723 kubelet[3281]: E0707 06:12:38.233720 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233723 kubelet[3281]: W0707 06:12:38.233724 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233815 kubelet[3281]: E0707 06:12:38.233728 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.233815 kubelet[3281]: E0707 06:12:38.233788 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233815 kubelet[3281]: W0707 06:12:38.233792 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233815 kubelet[3281]: E0707 06:12:38.233796 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.233815 kubelet[3281]: I0707 06:12:38.233808 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3b3041ea-8291-4b09-bfe8-d14c2bd2ab11-socket-dir\") pod \"csi-node-driver-rgmw9\" (UID: \"3b3041ea-8291-4b09-bfe8-d14c2bd2ab11\") " pod="calico-system/csi-node-driver-rgmw9" Jul 7 06:12:38.233922 kubelet[3281]: E0707 06:12:38.233915 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.233947 kubelet[3281]: W0707 06:12:38.233922 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.233947 kubelet[3281]: E0707 06:12:38.233928 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.233997 kubelet[3281]: E0707 06:12:38.233992 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.234019 kubelet[3281]: W0707 06:12:38.233997 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.234019 kubelet[3281]: E0707 06:12:38.234002 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.234113 kubelet[3281]: E0707 06:12:38.234108 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.234139 kubelet[3281]: W0707 06:12:38.234113 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.234139 kubelet[3281]: E0707 06:12:38.234117 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.234139 kubelet[3281]: I0707 06:12:38.234127 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klzts\" (UniqueName: \"kubernetes.io/projected/3b3041ea-8291-4b09-bfe8-d14c2bd2ab11-kube-api-access-klzts\") pod \"csi-node-driver-rgmw9\" (UID: \"3b3041ea-8291-4b09-bfe8-d14c2bd2ab11\") " pod="calico-system/csi-node-driver-rgmw9" Jul 7 06:12:38.234225 kubelet[3281]: E0707 06:12:38.234217 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.234258 kubelet[3281]: W0707 06:12:38.234226 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.234258 kubelet[3281]: E0707 06:12:38.234234 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.234318 kubelet[3281]: E0707 06:12:38.234311 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.234347 kubelet[3281]: W0707 06:12:38.234317 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.234347 kubelet[3281]: E0707 06:12:38.234324 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.234418 kubelet[3281]: E0707 06:12:38.234412 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.234418 kubelet[3281]: W0707 06:12:38.234417 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.234476 kubelet[3281]: E0707 06:12:38.234424 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.234505 kubelet[3281]: E0707 06:12:38.234495 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.234505 kubelet[3281]: W0707 06:12:38.234501 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.234544 kubelet[3281]: E0707 06:12:38.234507 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.244934 systemd[1]: Started cri-containerd-8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a.scope - libcontainer container 8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a. Jul 7 06:12:38.255579 containerd[1923]: time="2025-07-07T06:12:38.255560114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5cn9,Uid:a6bd70ed-f95b-4e72-8ef7-2f1184763810,Namespace:calico-system,Attempt:0,} returns sandbox id \"8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a\"" Jul 7 06:12:38.335087 kubelet[3281]: E0707 06:12:38.335035 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.335087 kubelet[3281]: W0707 06:12:38.335054 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.335087 kubelet[3281]: E0707 06:12:38.335071 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.335291 kubelet[3281]: E0707 06:12:38.335272 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.335291 kubelet[3281]: W0707 06:12:38.335284 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.335363 kubelet[3281]: E0707 06:12:38.335295 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.335552 kubelet[3281]: E0707 06:12:38.335508 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.335552 kubelet[3281]: W0707 06:12:38.335521 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.335552 kubelet[3281]: E0707 06:12:38.335532 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.335730 kubelet[3281]: E0707 06:12:38.335669 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.335730 kubelet[3281]: W0707 06:12:38.335677 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.335730 kubelet[3281]: E0707 06:12:38.335685 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.335890 kubelet[3281]: E0707 06:12:38.335845 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.335890 kubelet[3281]: W0707 06:12:38.335857 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.335890 kubelet[3281]: E0707 06:12:38.335867 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.336121 kubelet[3281]: E0707 06:12:38.336080 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.336121 kubelet[3281]: W0707 06:12:38.336090 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.336121 kubelet[3281]: E0707 06:12:38.336101 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.336308 kubelet[3281]: E0707 06:12:38.336257 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.336308 kubelet[3281]: W0707 06:12:38.336266 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.336308 kubelet[3281]: E0707 06:12:38.336275 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.336497 kubelet[3281]: E0707 06:12:38.336460 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.336497 kubelet[3281]: W0707 06:12:38.336471 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.336497 kubelet[3281]: E0707 06:12:38.336482 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.336627 kubelet[3281]: E0707 06:12:38.336617 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.336627 kubelet[3281]: W0707 06:12:38.336626 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.336708 kubelet[3281]: E0707 06:12:38.336635 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.336781 kubelet[3281]: E0707 06:12:38.336770 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.336781 kubelet[3281]: W0707 06:12:38.336779 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.336867 kubelet[3281]: E0707 06:12:38.336788 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.336963 kubelet[3281]: E0707 06:12:38.336953 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.336963 kubelet[3281]: W0707 06:12:38.336962 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.337051 kubelet[3281]: E0707 06:12:38.336971 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.337127 kubelet[3281]: E0707 06:12:38.337117 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.337127 kubelet[3281]: W0707 06:12:38.337128 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.337201 kubelet[3281]: E0707 06:12:38.337137 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.337336 kubelet[3281]: E0707 06:12:38.337321 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.337374 kubelet[3281]: W0707 06:12:38.337336 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.337374 kubelet[3281]: E0707 06:12:38.337349 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.337494 kubelet[3281]: E0707 06:12:38.337483 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.337494 kubelet[3281]: W0707 06:12:38.337493 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.337581 kubelet[3281]: E0707 06:12:38.337502 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.337684 kubelet[3281]: E0707 06:12:38.337672 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.337684 kubelet[3281]: W0707 06:12:38.337682 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.337806 kubelet[3281]: E0707 06:12:38.337692 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.337902 kubelet[3281]: E0707 06:12:38.337885 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.337971 kubelet[3281]: W0707 06:12:38.337901 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.337971 kubelet[3281]: E0707 06:12:38.337919 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.338096 kubelet[3281]: E0707 06:12:38.338085 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.338096 kubelet[3281]: W0707 06:12:38.338095 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.338177 kubelet[3281]: E0707 06:12:38.338105 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.338285 kubelet[3281]: E0707 06:12:38.338275 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.338285 kubelet[3281]: W0707 06:12:38.338285 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.338370 kubelet[3281]: E0707 06:12:38.338294 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.338425 kubelet[3281]: E0707 06:12:38.338414 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.338425 kubelet[3281]: W0707 06:12:38.338424 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.338505 kubelet[3281]: E0707 06:12:38.338434 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.338618 kubelet[3281]: E0707 06:12:38.338608 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.338618 kubelet[3281]: W0707 06:12:38.338618 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.338703 kubelet[3281]: E0707 06:12:38.338627 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.338909 kubelet[3281]: E0707 06:12:38.338896 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.338909 kubelet[3281]: W0707 06:12:38.338907 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.338995 kubelet[3281]: E0707 06:12:38.338918 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:38.339096 kubelet[3281]: E0707 06:12:38.339086 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.339136 kubelet[3281]: W0707 06:12:38.339096 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.339136 kubelet[3281]: E0707 06:12:38.339105 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.339263 kubelet[3281]: E0707 06:12:38.339252 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.339263 kubelet[3281]: W0707 06:12:38.339263 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.339349 kubelet[3281]: E0707 06:12:38.339275 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.339453 kubelet[3281]: E0707 06:12:38.339442 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.339453 kubelet[3281]: W0707 06:12:38.339452 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.339529 kubelet[3281]: E0707 06:12:38.339460 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.339662 kubelet[3281]: E0707 06:12:38.339637 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.339662 kubelet[3281]: W0707 06:12:38.339655 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.339662 kubelet[3281]: E0707 06:12:38.339666 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:38.347105 kubelet[3281]: E0707 06:12:38.347076 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:38.347105 kubelet[3281]: W0707 06:12:38.347098 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:38.347271 kubelet[3281]: E0707 06:12:38.347116 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:39.684946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2605099799.mount: Deactivated successfully. 
Jul 7 06:12:39.823793 kubelet[3281]: E0707 06:12:39.823769 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgmw9" podUID="3b3041ea-8291-4b09-bfe8-d14c2bd2ab11" Jul 7 06:12:40.023138 containerd[1923]: time="2025-07-07T06:12:40.023117938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:40.023398 containerd[1923]: time="2025-07-07T06:12:40.023346870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 06:12:40.023740 containerd[1923]: time="2025-07-07T06:12:40.023726189Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:40.024540 containerd[1923]: time="2025-07-07T06:12:40.024525364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:40.024816 containerd[1923]: time="2025-07-07T06:12:40.024799885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.077068333s" Jul 7 06:12:40.024851 containerd[1923]: time="2025-07-07T06:12:40.024817599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 06:12:40.025225 containerd[1923]: time="2025-07-07T06:12:40.025216134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:12:40.029050 containerd[1923]: time="2025-07-07T06:12:40.028995855Z" level=info msg="CreateContainer within sandbox \"bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:12:40.031596 containerd[1923]: time="2025-07-07T06:12:40.031583297Z" level=info msg="Container 1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:40.034245 containerd[1923]: time="2025-07-07T06:12:40.034209894Z" level=info msg="CreateContainer within sandbox \"bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05\"" Jul 7 06:12:40.034427 containerd[1923]: time="2025-07-07T06:12:40.034413438Z" level=info msg="StartContainer for \"1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05\"" Jul 7 06:12:40.034963 containerd[1923]: time="2025-07-07T06:12:40.034922426Z" level=info msg="connecting to shim 1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05" address="unix:///run/containerd/s/1e461f7911fab05ad59a81ff0ea5e5c5d2fbe139ef1c1c187ee35aa1db2f19b8" protocol=ttrpc version=3 Jul 7 06:12:40.060080 systemd[1]: Started 
cri-containerd-1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05.scope - libcontainer container 1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05. Jul 7 06:12:40.143307 containerd[1923]: time="2025-07-07T06:12:40.143272211Z" level=info msg="StartContainer for \"1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05\" returns successfully" Jul 7 06:12:40.882107 kubelet[3281]: I0707 06:12:40.882068 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fc6569587-l94vf" podStartSLOduration=1.8044526109999999 podStartE2EDuration="3.882055626s" podCreationTimestamp="2025-07-07 06:12:37 +0000 UTC" firstStartedPulling="2025-07-07 06:12:37.947565251 +0000 UTC m=+16.174539043" lastFinishedPulling="2025-07-07 06:12:40.025168269 +0000 UTC m=+18.252142058" observedRunningTime="2025-07-07 06:12:40.881512735 +0000 UTC m=+19.108486529" watchObservedRunningTime="2025-07-07 06:12:40.882055626 +0000 UTC m=+19.109029422" Jul 7 06:12:40.937613 kubelet[3281]: E0707 06:12:40.937518 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.937613 kubelet[3281]: W0707 06:12:40.937565 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.937613 kubelet[3281]: E0707 06:12:40.937605 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.938200 kubelet[3281]: E0707 06:12:40.938105 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.938200 kubelet[3281]: W0707 06:12:40.938138 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.938200 kubelet[3281]: E0707 06:12:40.938170 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.938675 kubelet[3281]: E0707 06:12:40.938617 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.938828 kubelet[3281]: W0707 06:12:40.938676 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.938828 kubelet[3281]: E0707 06:12:40.938708 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:40.939247 kubelet[3281]: E0707 06:12:40.939173 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.939247 kubelet[3281]: W0707 06:12:40.939196 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.939247 kubelet[3281]: E0707 06:12:40.939220 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.939635 kubelet[3281]: E0707 06:12:40.939609 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.939635 kubelet[3281]: W0707 06:12:40.939632 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.939877 kubelet[3281]: E0707 06:12:40.939688 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.940163 kubelet[3281]: E0707 06:12:40.940099 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.940163 kubelet[3281]: W0707 06:12:40.940128 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.940163 kubelet[3281]: E0707 06:12:40.940156 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.940578 kubelet[3281]: E0707 06:12:40.940533 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.940578 kubelet[3281]: W0707 06:12:40.940556 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.940578 kubelet[3281]: E0707 06:12:40.940577 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.941052 kubelet[3281]: E0707 06:12:40.941025 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.941052 kubelet[3281]: W0707 06:12:40.941051 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.941231 kubelet[3281]: E0707 06:12:40.941076 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:40.941511 kubelet[3281]: E0707 06:12:40.941486 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.941511 kubelet[3281]: W0707 06:12:40.941508 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.941745 kubelet[3281]: E0707 06:12:40.941530 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.941977 kubelet[3281]: E0707 06:12:40.941925 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.941977 kubelet[3281]: W0707 06:12:40.941950 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.941977 kubelet[3281]: E0707 06:12:40.941975 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.942415 kubelet[3281]: E0707 06:12:40.942368 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.942415 kubelet[3281]: W0707 06:12:40.942392 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.942415 kubelet[3281]: E0707 06:12:40.942416 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.942858 kubelet[3281]: E0707 06:12:40.942809 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.942858 kubelet[3281]: W0707 06:12:40.942840 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.943061 kubelet[3281]: E0707 06:12:40.942867 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.943231 kubelet[3281]: E0707 06:12:40.943207 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.943231 kubelet[3281]: W0707 06:12:40.943229 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.943412 kubelet[3281]: E0707 06:12:40.943251 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:40.943615 kubelet[3281]: E0707 06:12:40.943589 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.943615 kubelet[3281]: W0707 06:12:40.943611 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.943835 kubelet[3281]: E0707 06:12:40.943632 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.944052 kubelet[3281]: E0707 06:12:40.944023 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.944167 kubelet[3281]: W0707 06:12:40.944055 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.944167 kubelet[3281]: E0707 06:12:40.944087 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.957194 kubelet[3281]: E0707 06:12:40.957106 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.957194 kubelet[3281]: W0707 06:12:40.957144 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.957194 kubelet[3281]: E0707 06:12:40.957177 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.957684 kubelet[3281]: E0707 06:12:40.957624 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.957684 kubelet[3281]: W0707 06:12:40.957673 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.957904 kubelet[3281]: E0707 06:12:40.957708 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.958297 kubelet[3281]: E0707 06:12:40.958218 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.958297 kubelet[3281]: W0707 06:12:40.958252 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.958297 kubelet[3281]: E0707 06:12:40.958286 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:40.958855 kubelet[3281]: E0707 06:12:40.958761 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.958855 kubelet[3281]: W0707 06:12:40.958787 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.958855 kubelet[3281]: E0707 06:12:40.958813 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.959256 kubelet[3281]: E0707 06:12:40.959236 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.959358 kubelet[3281]: W0707 06:12:40.959263 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.959358 kubelet[3281]: E0707 06:12:40.959289 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.959783 kubelet[3281]: E0707 06:12:40.959699 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.959783 kubelet[3281]: W0707 06:12:40.959727 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.959783 kubelet[3281]: E0707 06:12:40.959754 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.960252 kubelet[3281]: E0707 06:12:40.960206 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.960252 kubelet[3281]: W0707 06:12:40.960233 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.960456 kubelet[3281]: E0707 06:12:40.960260 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.960759 kubelet[3281]: E0707 06:12:40.960676 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.960759 kubelet[3281]: W0707 06:12:40.960703 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.960759 kubelet[3281]: E0707 06:12:40.960726 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:40.961352 kubelet[3281]: E0707 06:12:40.961309 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.961466 kubelet[3281]: W0707 06:12:40.961356 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.961466 kubelet[3281]: E0707 06:12:40.961394 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.961918 kubelet[3281]: E0707 06:12:40.961882 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.962049 kubelet[3281]: W0707 06:12:40.961919 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.962049 kubelet[3281]: E0707 06:12:40.961953 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.962440 kubelet[3281]: E0707 06:12:40.962413 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.962538 kubelet[3281]: W0707 06:12:40.962438 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.962538 kubelet[3281]: E0707 06:12:40.962463 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.962939 kubelet[3281]: E0707 06:12:40.962874 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.962939 kubelet[3281]: W0707 06:12:40.962900 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.962939 kubelet[3281]: E0707 06:12:40.962928 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.963356 kubelet[3281]: E0707 06:12:40.963330 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.963464 kubelet[3281]: W0707 06:12:40.963356 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.963464 kubelet[3281]: E0707 06:12:40.963383 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:40.963862 kubelet[3281]: E0707 06:12:40.963836 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.963862 kubelet[3281]: W0707 06:12:40.963860 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.964082 kubelet[3281]: E0707 06:12:40.963885 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.964341 kubelet[3281]: E0707 06:12:40.964289 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.964341 kubelet[3281]: W0707 06:12:40.964312 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.964341 kubelet[3281]: E0707 06:12:40.964335 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.964809 kubelet[3281]: E0707 06:12:40.964761 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.964809 kubelet[3281]: W0707 06:12:40.964785 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.964809 kubelet[3281]: E0707 06:12:40.964810 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.965446 kubelet[3281]: E0707 06:12:40.965394 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.965446 kubelet[3281]: W0707 06:12:40.965427 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.965671 kubelet[3281]: E0707 06:12:40.965461 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:12:40.965932 kubelet[3281]: E0707 06:12:40.965878 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:12:40.965932 kubelet[3281]: W0707 06:12:40.965903 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:12:40.965932 kubelet[3281]: E0707 06:12:40.965929 3281 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:12:41.455145 containerd[1923]: time="2025-07-07T06:12:41.455091529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:41.455357 containerd[1923]: time="2025-07-07T06:12:41.455292849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:12:41.455690 containerd[1923]: time="2025-07-07T06:12:41.455665672Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:41.456477 containerd[1923]: time="2025-07-07T06:12:41.456435989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:41.456830 containerd[1923]: time="2025-07-07T06:12:41.456789914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.431559629s" Jul 7 06:12:41.456830 containerd[1923]: time="2025-07-07T06:12:41.456805802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:12:41.458299 containerd[1923]: time="2025-07-07T06:12:41.458289035Z" level=info msg="CreateContainer within sandbox \"8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:12:41.461460 containerd[1923]: time="2025-07-07T06:12:41.461446313Z" level=info msg="Container 3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:41.464888 containerd[1923]: time="2025-07-07T06:12:41.464845800Z" level=info msg="CreateContainer within sandbox \"8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241\"" Jul 7 06:12:41.465058 containerd[1923]: time="2025-07-07T06:12:41.465019193Z" level=info msg="StartContainer for \"3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241\"" Jul 7 06:12:41.465984 containerd[1923]: time="2025-07-07T06:12:41.465947013Z" level=info msg="connecting to shim 3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241" address="unix:///run/containerd/s/e0d2a143668e637d0593fc67bc05e6445d4ddbbf3e69e065fb67aab0828817cf" protocol=ttrpc version=3 Jul 7 06:12:41.490942 systemd[1]: Started cri-containerd-3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241.scope - libcontainer container 3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241. 
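The repeated FlexVolume probe failures above occur because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet: the driver call returns empty output, and unmarshalling empty output with Go's encoding/json yields exactly the "unexpected end of JSON input" error the kubelet logs. A minimal sketch of that failure mode follows; the DriverStatus struct is a simplified stand-in for illustration, not the kubelet's actual type.

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is a simplified stand-in for the kubelet's FlexVolume
// driver-call result type (assumption, not the real definition).
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The uds binary was not found in $PATH, so the driver call produced
	// no output at all, as in the log entries above.
	output := []byte("")

	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Prints: unmarshal failed: unexpected end of JSON input
		fmt.Println("unmarshal failed:", err)
	}

	// Once Calico's pod2daemon-flexvol init container (pulled in the entries
	// above) installs the uds binary, a successful `uds init` is expected to
	// print JSON along the lines of:
	//   {"status":"Success","capabilities":{"attach":false}}
	// which unmarshals cleanly and stops the probe errors.
}
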
Jul 7 06:12:41.512337 containerd[1923]: time="2025-07-07T06:12:41.512304412Z" level=info msg="StartContainer for \"3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241\" returns successfully" Jul 7 06:12:41.517723 systemd[1]: cri-containerd-3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241.scope: Deactivated successfully. Jul 7 06:12:41.519164 containerd[1923]: time="2025-07-07T06:12:41.519136004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241\" id:\"3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241\" pid:4109 exited_at:{seconds:1751868761 nanos:518795998}" Jul 7 06:12:41.519232 containerd[1923]: time="2025-07-07T06:12:41.519183040Z" level=info msg="received exit event container_id:\"3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241\" id:\"3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241\" pid:4109 exited_at:{seconds:1751868761 nanos:518795998}" Jul 7 06:12:41.534969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241-rootfs.mount: Deactivated successfully. Jul 7 06:12:41.824137 kubelet[3281]: E0707 06:12:41.824032 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgmw9" podUID="3b3041ea-8291-4b09-bfe8-d14c2bd2ab11" Jul 7 06:12:41.881110 kubelet[3281]: I0707 06:12:41.881018 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:12:42.891417 containerd[1923]: time="2025-07-07T06:12:42.891318053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:12:43.824230 kubelet[3281]: E0707 06:12:43.824092 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgmw9" podUID="3b3041ea-8291-4b09-bfe8-d14c2bd2ab11" Jul 7 06:12:45.244399 containerd[1923]: time="2025-07-07T06:12:45.244345679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:45.244608 containerd[1923]: time="2025-07-07T06:12:45.244572442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 06:12:45.244974 containerd[1923]: time="2025-07-07T06:12:45.244935636Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:45.245743 containerd[1923]: time="2025-07-07T06:12:45.245697712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:45.246123 containerd[1923]: time="2025-07-07T06:12:45.246081213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.354684666s" Jul 7 06:12:45.246123 containerd[1923]: time="2025-07-07T06:12:45.246097305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 06:12:45.247499 containerd[1923]: time="2025-07-07T06:12:45.247488477Z" level=info msg="CreateContainer within sandbox \"8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:12:45.250569 containerd[1923]: time="2025-07-07T06:12:45.250529006Z" level=info msg="Container a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:45.253869 containerd[1923]: time="2025-07-07T06:12:45.253828806Z" level=info msg="CreateContainer within sandbox \"8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b\"" Jul 7 06:12:45.254074 containerd[1923]: time="2025-07-07T06:12:45.254035307Z" level=info msg="StartContainer for \"a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b\"" Jul 7 06:12:45.254800 containerd[1923]: time="2025-07-07T06:12:45.254758218Z" level=info msg="connecting to shim a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b" address="unix:///run/containerd/s/e0d2a143668e637d0593fc67bc05e6445d4ddbbf3e69e065fb67aab0828817cf" protocol=ttrpc version=3 Jul 7 06:12:45.271810 systemd[1]: Started cri-containerd-a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b.scope - libcontainer container a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b. Jul 7 06:12:45.297204 containerd[1923]: time="2025-07-07T06:12:45.297145199Z" level=info msg="StartContainer for \"a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b\" returns successfully" Jul 7 06:12:45.823597 kubelet[3281]: E0707 06:12:45.823535 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rgmw9" podUID="3b3041ea-8291-4b09-bfe8-d14c2bd2ab11" Jul 7 06:12:45.828478 containerd[1923]: time="2025-07-07T06:12:45.828457474Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:12:45.829332 systemd[1]: cri-containerd-a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b.scope: Deactivated successfully. Jul 7 06:12:45.829489 systemd[1]: cri-containerd-a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b.scope: Consumed 346ms CPU time, 193.5M memory peak, 171.2M written to disk. 
Jul 7 06:12:45.830160 containerd[1923]: time="2025-07-07T06:12:45.830146533Z" level=info msg="received exit event container_id:\"a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b\" id:\"a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b\" pid:4170 exited_at:{seconds:1751868765 nanos:830062003}" Jul 7 06:12:45.830213 containerd[1923]: time="2025-07-07T06:12:45.830201925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b\" id:\"a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b\" pid:4170 exited_at:{seconds:1751868765 nanos:830062003}" Jul 7 06:12:45.834128 kubelet[3281]: I0707 06:12:45.834115 3281 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:12:45.840850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b-rootfs.mount: Deactivated successfully. Jul 7 06:12:45.871996 systemd[1]: Created slice kubepods-besteffort-pode310f79c_0355_4c6a_bc79_ee486f94963f.slice - libcontainer container kubepods-besteffort-pode310f79c_0355_4c6a_bc79_ee486f94963f.slice. Jul 7 06:12:45.893939 kubelet[3281]: I0707 06:12:45.893889 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e310f79c-0355-4c6a-bc79-ee486f94963f-calico-apiserver-certs\") pod \"calico-apiserver-6cf9698c94-4q2rm\" (UID: \"e310f79c-0355-4c6a-bc79-ee486f94963f\") " pod="calico-apiserver/calico-apiserver-6cf9698c94-4q2rm" Jul 7 06:12:45.894137 kubelet[3281]: I0707 06:12:45.893953 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq6lh\" (UniqueName: \"kubernetes.io/projected/e310f79c-0355-4c6a-bc79-ee486f94963f-kube-api-access-dq6lh\") pod \"calico-apiserver-6cf9698c94-4q2rm\" (UID: \"e310f79c-0355-4c6a-bc79-ee486f94963f\") " pod="calico-apiserver/calico-apiserver-6cf9698c94-4q2rm" Jul 7 06:12:45.933970 systemd[1]: Created slice kubepods-besteffort-podde6e7e2a_0310_4167_ae5f_165f0cf836d6.slice - libcontainer container kubepods-besteffort-podde6e7e2a_0310_4167_ae5f_165f0cf836d6.slice. Jul 7 06:12:45.984943 systemd[1]: Created slice kubepods-besteffort-pod99b5b42d_97bc_4221_9141_472378a6b5d5.slice - libcontainer container kubepods-besteffort-pod99b5b42d_97bc_4221_9141_472378a6b5d5.slice. 
Jul 7 06:12:45.994912 kubelet[3281]: I0707 06:12:45.994882 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de6e7e2a-0310-4167-ae5f-165f0cf836d6-tigera-ca-bundle\") pod \"calico-kube-controllers-5c7f86ddf5-cjvwq\" (UID: \"de6e7e2a-0310-4167-ae5f-165f0cf836d6\") " pod="calico-system/calico-kube-controllers-5c7f86ddf5-cjvwq" Jul 7 06:12:46.011508 kubelet[3281]: I0707 06:12:45.994919 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2wv\" (UniqueName: \"kubernetes.io/projected/de6e7e2a-0310-4167-ae5f-165f0cf836d6-kube-api-access-lv2wv\") pod \"calico-kube-controllers-5c7f86ddf5-cjvwq\" (UID: \"de6e7e2a-0310-4167-ae5f-165f0cf836d6\") " pod="calico-system/calico-kube-controllers-5c7f86ddf5-cjvwq" Jul 7 06:12:46.011508 kubelet[3281]: I0707 06:12:45.995139 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/99b5b42d-97bc-4221-9141-472378a6b5d5-calico-apiserver-certs\") pod \"calico-apiserver-6cf9698c94-kmd2s\" (UID: \"99b5b42d-97bc-4221-9141-472378a6b5d5\") " pod="calico-apiserver/calico-apiserver-6cf9698c94-kmd2s" Jul 7 06:12:46.011508 kubelet[3281]: I0707 06:12:45.995185 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6krz\" (UniqueName: \"kubernetes.io/projected/99b5b42d-97bc-4221-9141-472378a6b5d5-kube-api-access-l6krz\") pod \"calico-apiserver-6cf9698c94-kmd2s\" (UID: \"99b5b42d-97bc-4221-9141-472378a6b5d5\") " pod="calico-apiserver/calico-apiserver-6cf9698c94-kmd2s" Jul 7 06:12:46.025588 systemd[1]: Created slice kubepods-burstable-pod3b203fd7_1f2e_4ab1_b93c_16937892999c.slice - libcontainer container kubepods-burstable-pod3b203fd7_1f2e_4ab1_b93c_16937892999c.slice. Jul 7 06:12:46.042519 systemd[1]: Created slice kubepods-besteffort-pod556e2d87_7451_48af_b423_607a56d2006d.slice - libcontainer container kubepods-besteffort-pod556e2d87_7451_48af_b423_607a56d2006d.slice. Jul 7 06:12:46.081345 systemd[1]: Created slice kubepods-burstable-pod1d759a72_d0fc_47cd_8309_fee81138534e.slice - libcontainer container kubepods-burstable-pod1d759a72_d0fc_47cd_8309_fee81138534e.slice. 
Jul 7 06:12:46.095886 kubelet[3281]: I0707 06:12:46.095804 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzdjh\" (UniqueName: \"kubernetes.io/projected/1d759a72-d0fc-47cd-8309-fee81138534e-kube-api-access-nzdjh\") pod \"coredns-674b8bbfcf-fbsmv\" (UID: \"1d759a72-d0fc-47cd-8309-fee81138534e\") " pod="kube-system/coredns-674b8bbfcf-fbsmv" Jul 7 06:12:46.095886 kubelet[3281]: I0707 06:12:46.095866 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b203fd7-1f2e-4ab1-b93c-16937892999c-config-volume\") pod \"coredns-674b8bbfcf-nt2hm\" (UID: \"3b203fd7-1f2e-4ab1-b93c-16937892999c\") " pod="kube-system/coredns-674b8bbfcf-nt2hm" Jul 7 06:12:46.099976 kubelet[3281]: I0707 06:12:46.095927 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxnx\" (UniqueName: \"kubernetes.io/projected/3b203fd7-1f2e-4ab1-b93c-16937892999c-kube-api-access-blxnx\") pod \"coredns-674b8bbfcf-nt2hm\" (UID: \"3b203fd7-1f2e-4ab1-b93c-16937892999c\") " pod="kube-system/coredns-674b8bbfcf-nt2hm" Jul 7 06:12:46.099976 kubelet[3281]: I0707 06:12:46.096011 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnr9h\" (UniqueName: \"kubernetes.io/projected/556e2d87-7451-48af-b423-607a56d2006d-kube-api-access-bnr9h\") pod \"whisker-77484bb64c-pxpqc\" (UID: \"556e2d87-7451-48af-b423-607a56d2006d\") " pod="calico-system/whisker-77484bb64c-pxpqc" Jul 7 06:12:46.099976 kubelet[3281]: I0707 06:12:46.096150 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/556e2d87-7451-48af-b423-607a56d2006d-whisker-backend-key-pair\") pod \"whisker-77484bb64c-pxpqc\" (UID: \"556e2d87-7451-48af-b423-607a56d2006d\") " pod="calico-system/whisker-77484bb64c-pxpqc" Jul 7 06:12:46.099976 kubelet[3281]: I0707 06:12:46.096331 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1d759a72-d0fc-47cd-8309-fee81138534e-config-volume\") pod \"coredns-674b8bbfcf-fbsmv\" (UID: \"1d759a72-d0fc-47cd-8309-fee81138534e\") " pod="kube-system/coredns-674b8bbfcf-fbsmv" Jul 7 06:12:46.099976 kubelet[3281]: I0707 06:12:46.096428 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/556e2d87-7451-48af-b423-607a56d2006d-whisker-ca-bundle\") pod \"whisker-77484bb64c-pxpqc\" (UID: \"556e2d87-7451-48af-b423-607a56d2006d\") " pod="calico-system/whisker-77484bb64c-pxpqc" Jul 7 06:12:46.134713 systemd[1]: Created slice kubepods-besteffort-podf039077b_1995_4908_9f12_7dced13a4d6a.slice - libcontainer container kubepods-besteffort-podf039077b_1995_4908_9f12_7dced13a4d6a.slice. 
Jul 7 06:12:46.175021 containerd[1923]: time="2025-07-07T06:12:46.174943938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-4q2rm,Uid:e310f79c-0355-4c6a-bc79-ee486f94963f,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:12:46.197488 kubelet[3281]: I0707 06:12:46.197464 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f039077b-1995-4908-9f12-7dced13a4d6a-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-fpfpz\" (UID: \"f039077b-1995-4908-9f12-7dced13a4d6a\") " pod="calico-system/goldmane-768f4c5c69-fpfpz" Jul 7 06:12:46.197488 kubelet[3281]: I0707 06:12:46.197490 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f039077b-1995-4908-9f12-7dced13a4d6a-goldmane-key-pair\") pod \"goldmane-768f4c5c69-fpfpz\" (UID: \"f039077b-1995-4908-9f12-7dced13a4d6a\") " pod="calico-system/goldmane-768f4c5c69-fpfpz" Jul 7 06:12:46.197638 kubelet[3281]: I0707 06:12:46.197503 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f039077b-1995-4908-9f12-7dced13a4d6a-config\") pod \"goldmane-768f4c5c69-fpfpz\" (UID: \"f039077b-1995-4908-9f12-7dced13a4d6a\") " pod="calico-system/goldmane-768f4c5c69-fpfpz" Jul 7 06:12:46.197638 kubelet[3281]: I0707 06:12:46.197552 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jx7l\" (UniqueName: \"kubernetes.io/projected/f039077b-1995-4908-9f12-7dced13a4d6a-kube-api-access-9jx7l\") pod \"goldmane-768f4c5c69-fpfpz\" (UID: \"f039077b-1995-4908-9f12-7dced13a4d6a\") " pod="calico-system/goldmane-768f4c5c69-fpfpz" Jul 7 06:12:46.219855 containerd[1923]: time="2025-07-07T06:12:46.219801407Z" level=error msg="Failed to destroy network for sandbox \"a73bea329b6f9000d7c15a8b381e0814db3cab5bee74020ac1e58b4ca27979cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.220345 containerd[1923]: time="2025-07-07T06:12:46.220326563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-4q2rm,Uid:e310f79c-0355-4c6a-bc79-ee486f94963f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73bea329b6f9000d7c15a8b381e0814db3cab5bee74020ac1e58b4ca27979cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.220514 kubelet[3281]: E0707 06:12:46.220487 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73bea329b6f9000d7c15a8b381e0814db3cab5bee74020ac1e58b4ca27979cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.220546 kubelet[3281]: E0707 06:12:46.220538 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a73bea329b6f9000d7c15a8b381e0814db3cab5bee74020ac1e58b4ca27979cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cf9698c94-4q2rm" Jul 7 06:12:46.220573 kubelet[3281]: E0707 06:12:46.220552 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a73bea329b6f9000d7c15a8b381e0814db3cab5bee74020ac1e58b4ca27979cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cf9698c94-4q2rm" Jul 7 06:12:46.220605 kubelet[3281]: E0707 06:12:46.220591 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cf9698c94-4q2rm_calico-apiserver(e310f79c-0355-4c6a-bc79-ee486f94963f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cf9698c94-4q2rm_calico-apiserver(e310f79c-0355-4c6a-bc79-ee486f94963f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a73bea329b6f9000d7c15a8b381e0814db3cab5bee74020ac1e58b4ca27979cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cf9698c94-4q2rm" podUID="e310f79c-0355-4c6a-bc79-ee486f94963f" Jul 7 06:12:46.254995 containerd[1923]: time="2025-07-07T06:12:46.254878408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7f86ddf5-cjvwq,Uid:de6e7e2a-0310-4167-ae5f-165f0cf836d6,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:46.278633 containerd[1923]: time="2025-07-07T06:12:46.278581811Z" level=error msg="Failed to destroy network for sandbox \"d10b51ae078a01b3236fe9444a729139a1ba2c1a1e41e9ce76db5ef358e50e29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.279122 containerd[1923]: time="2025-07-07T06:12:46.279074428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7f86ddf5-cjvwq,Uid:de6e7e2a-0310-4167-ae5f-165f0cf836d6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10b51ae078a01b3236fe9444a729139a1ba2c1a1e41e9ce76db5ef358e50e29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.279279 kubelet[3281]: E0707 06:12:46.279224 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10b51ae078a01b3236fe9444a729139a1ba2c1a1e41e9ce76db5ef358e50e29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.279279 kubelet[3281]: E0707 06:12:46.279267 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d10b51ae078a01b3236fe9444a729139a1ba2c1a1e41e9ce76db5ef358e50e29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7f86ddf5-cjvwq" Jul 7 06:12:46.279337 kubelet[3281]: E0707 06:12:46.279280 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d10b51ae078a01b3236fe9444a729139a1ba2c1a1e41e9ce76db5ef358e50e29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7f86ddf5-cjvwq" Jul 7 06:12:46.279337 kubelet[3281]: E0707 06:12:46.279318 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c7f86ddf5-cjvwq_calico-system(de6e7e2a-0310-4167-ae5f-165f0cf836d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c7f86ddf5-cjvwq_calico-system(de6e7e2a-0310-4167-ae5f-165f0cf836d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d10b51ae078a01b3236fe9444a729139a1ba2c1a1e41e9ce76db5ef358e50e29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7f86ddf5-cjvwq" podUID="de6e7e2a-0310-4167-ae5f-165f0cf836d6" Jul 7 06:12:46.279993 systemd[1]: run-netns-cni\x2d17aef5a3\x2d396c\x2d193f\x2d0559\x2df7bf84c5830b.mount: Deactivated successfully. 
Jul 7 06:12:46.290822 containerd[1923]: time="2025-07-07T06:12:46.290776484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-kmd2s,Uid:99b5b42d-97bc-4221-9141-472378a6b5d5,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:12:46.315406 containerd[1923]: time="2025-07-07T06:12:46.315354613Z" level=error msg="Failed to destroy network for sandbox \"5168191f4d7937c875a525f28705883c3bd312ab7c83730e1d6a047e3a25ae2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.315795 containerd[1923]: time="2025-07-07T06:12:46.315752015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-kmd2s,Uid:99b5b42d-97bc-4221-9141-472378a6b5d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5168191f4d7937c875a525f28705883c3bd312ab7c83730e1d6a047e3a25ae2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.315903 kubelet[3281]: E0707 06:12:46.315877 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5168191f4d7937c875a525f28705883c3bd312ab7c83730e1d6a047e3a25ae2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.315952 kubelet[3281]: E0707 06:12:46.315922 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5168191f4d7937c875a525f28705883c3bd312ab7c83730e1d6a047e3a25ae2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cf9698c94-kmd2s" Jul 7 06:12:46.315952 kubelet[3281]: E0707 06:12:46.315942 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5168191f4d7937c875a525f28705883c3bd312ab7c83730e1d6a047e3a25ae2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cf9698c94-kmd2s" Jul 7 06:12:46.316030 kubelet[3281]: E0707 06:12:46.316013 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cf9698c94-kmd2s_calico-apiserver(99b5b42d-97bc-4221-9141-472378a6b5d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cf9698c94-kmd2s_calico-apiserver(99b5b42d-97bc-4221-9141-472378a6b5d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5168191f4d7937c875a525f28705883c3bd312ab7c83730e1d6a047e3a25ae2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cf9698c94-kmd2s" podUID="99b5b42d-97bc-4221-9141-472378a6b5d5" Jul 7 06:12:46.336787 
containerd[1923]: time="2025-07-07T06:12:46.336742081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt2hm,Uid:3b203fd7-1f2e-4ab1-b93c-16937892999c,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:46.354450 containerd[1923]: time="2025-07-07T06:12:46.354426713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77484bb64c-pxpqc,Uid:556e2d87-7451-48af-b423-607a56d2006d,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:46.360084 containerd[1923]: time="2025-07-07T06:12:46.360019653Z" level=error msg="Failed to destroy network for sandbox \"93395206eed4ef7c9d3f043ba721cae6f4206d0da611826733542f084fc69a86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.360484 containerd[1923]: time="2025-07-07T06:12:46.360466628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt2hm,Uid:3b203fd7-1f2e-4ab1-b93c-16937892999c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93395206eed4ef7c9d3f043ba721cae6f4206d0da611826733542f084fc69a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.360674 kubelet[3281]: E0707 06:12:46.360627 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93395206eed4ef7c9d3f043ba721cae6f4206d0da611826733542f084fc69a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.360729 kubelet[3281]: E0707 06:12:46.360707 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93395206eed4ef7c9d3f043ba721cae6f4206d0da611826733542f084fc69a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nt2hm" Jul 7 06:12:46.360729 kubelet[3281]: E0707 06:12:46.360724 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93395206eed4ef7c9d3f043ba721cae6f4206d0da611826733542f084fc69a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nt2hm" Jul 7 06:12:46.360792 kubelet[3281]: E0707 06:12:46.360766 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nt2hm_kube-system(3b203fd7-1f2e-4ab1-b93c-16937892999c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nt2hm_kube-system(3b203fd7-1f2e-4ab1-b93c-16937892999c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93395206eed4ef7c9d3f043ba721cae6f4206d0da611826733542f084fc69a86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-nt2hm" podUID="3b203fd7-1f2e-4ab1-b93c-16937892999c" Jul 7 06:12:46.377843 containerd[1923]: time="2025-07-07T06:12:46.377786836Z" level=error msg="Failed to destroy network for sandbox \"0ff5afe3e29c9b794a909a97f77f8ad2b6ad8d0cd0f4489683e8e4e477062dcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.378315 containerd[1923]: time="2025-07-07T06:12:46.378297571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77484bb64c-pxpqc,Uid:556e2d87-7451-48af-b423-607a56d2006d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff5afe3e29c9b794a909a97f77f8ad2b6ad8d0cd0f4489683e8e4e477062dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.378493 kubelet[3281]: E0707 06:12:46.378470 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff5afe3e29c9b794a909a97f77f8ad2b6ad8d0cd0f4489683e8e4e477062dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.378531 kubelet[3281]: E0707 06:12:46.378512 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff5afe3e29c9b794a909a97f77f8ad2b6ad8d0cd0f4489683e8e4e477062dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77484bb64c-pxpqc" Jul 7 06:12:46.378531 kubelet[3281]: E0707 06:12:46.378526 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ff5afe3e29c9b794a909a97f77f8ad2b6ad8d0cd0f4489683e8e4e477062dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77484bb64c-pxpqc" Jul 7 06:12:46.378573 kubelet[3281]: E0707 06:12:46.378559 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77484bb64c-pxpqc_calico-system(556e2d87-7451-48af-b423-607a56d2006d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77484bb64c-pxpqc_calico-system(556e2d87-7451-48af-b423-607a56d2006d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ff5afe3e29c9b794a909a97f77f8ad2b6ad8d0cd0f4489683e8e4e477062dcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77484bb64c-pxpqc" podUID="556e2d87-7451-48af-b423-607a56d2006d" Jul 7 06:12:46.385078 containerd[1923]: time="2025-07-07T06:12:46.385065511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fbsmv,Uid:1d759a72-d0fc-47cd-8309-fee81138534e,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:46.408633 
containerd[1923]: time="2025-07-07T06:12:46.408567277Z" level=error msg="Failed to destroy network for sandbox \"358ed7c0ee562f10d6b605d5470e52b4ff2052fc12a3acccf4d932e036815a86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.409129 containerd[1923]: time="2025-07-07T06:12:46.409062359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fbsmv,Uid:1d759a72-d0fc-47cd-8309-fee81138534e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"358ed7c0ee562f10d6b605d5470e52b4ff2052fc12a3acccf4d932e036815a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.409257 kubelet[3281]: E0707 06:12:46.409203 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358ed7c0ee562f10d6b605d5470e52b4ff2052fc12a3acccf4d932e036815a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.409257 kubelet[3281]: E0707 06:12:46.409246 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358ed7c0ee562f10d6b605d5470e52b4ff2052fc12a3acccf4d932e036815a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fbsmv" Jul 7 06:12:46.409317 kubelet[3281]: E0707 06:12:46.409259 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"358ed7c0ee562f10d6b605d5470e52b4ff2052fc12a3acccf4d932e036815a86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fbsmv" Jul 7 06:12:46.409317 kubelet[3281]: E0707 06:12:46.409291 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fbsmv_kube-system(1d759a72-d0fc-47cd-8309-fee81138534e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fbsmv_kube-system(1d759a72-d0fc-47cd-8309-fee81138534e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"358ed7c0ee562f10d6b605d5470e52b4ff2052fc12a3acccf4d932e036815a86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fbsmv" podUID="1d759a72-d0fc-47cd-8309-fee81138534e" Jul 7 06:12:46.441156 containerd[1923]: time="2025-07-07T06:12:46.441053707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fpfpz,Uid:f039077b-1995-4908-9f12-7dced13a4d6a,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:46.465355 containerd[1923]: time="2025-07-07T06:12:46.465301406Z" level=error msg="Failed to destroy network for sandbox 
\"f2c253302082d3419a353f9bade300e3f376f602007f354efca712f6c2204c23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.465815 containerd[1923]: time="2025-07-07T06:12:46.465798466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fpfpz,Uid:f039077b-1995-4908-9f12-7dced13a4d6a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2c253302082d3419a353f9bade300e3f376f602007f354efca712f6c2204c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.465957 kubelet[3281]: E0707 06:12:46.465935 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2c253302082d3419a353f9bade300e3f376f602007f354efca712f6c2204c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:46.465992 kubelet[3281]: E0707 06:12:46.465974 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2c253302082d3419a353f9bade300e3f376f602007f354efca712f6c2204c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fpfpz" Jul 7 06:12:46.466011 kubelet[3281]: E0707 06:12:46.465991 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2c253302082d3419a353f9bade300e3f376f602007f354efca712f6c2204c23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-fpfpz" Jul 7 06:12:46.466034 kubelet[3281]: E0707 06:12:46.466021 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-fpfpz_calico-system(f039077b-1995-4908-9f12-7dced13a4d6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-fpfpz_calico-system(f039077b-1995-4908-9f12-7dced13a4d6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2c253302082d3419a353f9bade300e3f376f602007f354efca712f6c2204c23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-fpfpz" podUID="f039077b-1995-4908-9f12-7dced13a4d6a" Jul 7 06:12:46.908894 containerd[1923]: time="2025-07-07T06:12:46.908786724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:12:47.257072 systemd[1]: run-netns-cni\x2d7cb5b4cb\x2d317d\x2d76ea\x2de74d\x2d84c168d8e887.mount: Deactivated successfully. Jul 7 06:12:47.839005 systemd[1]: Created slice kubepods-besteffort-pod3b3041ea_8291_4b09_bfe8_d14c2bd2ab11.slice - libcontainer container kubepods-besteffort-pod3b3041ea_8291_4b09_bfe8_d14c2bd2ab11.slice. 
Jul 7 06:12:47.845147 containerd[1923]: time="2025-07-07T06:12:47.845060545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgmw9,Uid:3b3041ea-8291-4b09-bfe8-d14c2bd2ab11,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:47.871742 containerd[1923]: time="2025-07-07T06:12:47.871714389Z" level=error msg="Failed to destroy network for sandbox \"d940340115e9131b9fabcb072f535937cd86a04c499a807a27505f5d97237904\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:47.872211 containerd[1923]: time="2025-07-07T06:12:47.872192215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgmw9,Uid:3b3041ea-8291-4b09-bfe8-d14c2bd2ab11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d940340115e9131b9fabcb072f535937cd86a04c499a807a27505f5d97237904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:47.872435 kubelet[3281]: E0707 06:12:47.872399 3281 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d940340115e9131b9fabcb072f535937cd86a04c499a807a27505f5d97237904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:12:47.872601 kubelet[3281]: E0707 06:12:47.872457 3281 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d940340115e9131b9fabcb072f535937cd86a04c499a807a27505f5d97237904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rgmw9" Jul 7 06:12:47.872601 kubelet[3281]: E0707 06:12:47.872470 3281 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d940340115e9131b9fabcb072f535937cd86a04c499a807a27505f5d97237904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rgmw9" Jul 7 06:12:47.872601 kubelet[3281]: E0707 06:12:47.872521 3281 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rgmw9_calico-system(3b3041ea-8291-4b09-bfe8-d14c2bd2ab11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rgmw9_calico-system(3b3041ea-8291-4b09-bfe8-d14c2bd2ab11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d940340115e9131b9fabcb072f535937cd86a04c499a807a27505f5d97237904\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rgmw9" podUID="3b3041ea-8291-4b09-bfe8-d14c2bd2ab11" Jul 7 06:12:47.873460 systemd[1]: run-netns-cni\x2d4577d7c9\x2d8ecd\x2d6762\x2d10fe\x2da766a6c607d0.mount: Deactivated successfully. 
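Every RunPodSandbox failure above reduces to the same pre-flight check: the Calico CNI plugin reads /var/lib/calico/nodename, which is only written once the calico/node container (pulled and started in the entries below) is running, so until then every pod that needs pod networking fails to get a sandbox. A minimal sketch of that kind of guard, using a hypothetical readNodename helper rather than Calico's actual implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// readNodename mirrors the guard implied by the errors above: if calico/node
// has not yet written its nodename file, refuse to set up pod networking and
// point at what to check.
func readNodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", path)
		}
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		// This is the state during the sandbox failures logged above.
		fmt.Println("cannot set up network:", err)
		return
	}
	fmt.Println("node name:", name)
}
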
Jul 7 06:12:50.313431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453175511.mount: Deactivated successfully. Jul 7 06:12:50.334754 containerd[1923]: time="2025-07-07T06:12:50.334731311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:50.334974 containerd[1923]: time="2025-07-07T06:12:50.334961089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:12:50.335288 containerd[1923]: time="2025-07-07T06:12:50.335276222Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:50.336044 containerd[1923]: time="2025-07-07T06:12:50.336033945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:50.336343 containerd[1923]: time="2025-07-07T06:12:50.336330734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.427461196s" Jul 7 06:12:50.336367 containerd[1923]: time="2025-07-07T06:12:50.336347261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:12:50.340343 containerd[1923]: time="2025-07-07T06:12:50.340300184Z" level=info msg="CreateContainer within sandbox \"8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:12:50.343798 containerd[1923]: time="2025-07-07T06:12:50.343785718Z" level=info msg="Container caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:50.347508 containerd[1923]: time="2025-07-07T06:12:50.347495041Z" level=info msg="CreateContainer within sandbox \"8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\"" Jul 7 06:12:50.347774 containerd[1923]: time="2025-07-07T06:12:50.347761774Z" level=info msg="StartContainer for \"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\"" Jul 7 06:12:50.348511 containerd[1923]: time="2025-07-07T06:12:50.348497762Z" level=info msg="connecting to shim caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f" address="unix:///run/containerd/s/e0d2a143668e637d0593fc67bc05e6445d4ddbbf3e69e065fb67aab0828817cf" protocol=ttrpc version=3 Jul 7 06:12:50.363834 systemd[1]: Started cri-containerd-caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f.scope - libcontainer container caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f. Jul 7 06:12:50.384127 containerd[1923]: time="2025-07-07T06:12:50.384076152Z" level=info msg="StartContainer for \"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" returns successfully" Jul 7 06:12:50.443815 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Jul 7 06:12:50.443872 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 06:12:50.629660 kubelet[3281]: I0707 06:12:50.629590 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/556e2d87-7451-48af-b423-607a56d2006d-whisker-backend-key-pair\") pod \"556e2d87-7451-48af-b423-607a56d2006d\" (UID: \"556e2d87-7451-48af-b423-607a56d2006d\") " Jul 7 06:12:50.629660 kubelet[3281]: I0707 06:12:50.629613 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/556e2d87-7451-48af-b423-607a56d2006d-whisker-ca-bundle\") pod \"556e2d87-7451-48af-b423-607a56d2006d\" (UID: \"556e2d87-7451-48af-b423-607a56d2006d\") " Jul 7 06:12:50.629660 kubelet[3281]: I0707 06:12:50.629640 3281 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnr9h\" (UniqueName: \"kubernetes.io/projected/556e2d87-7451-48af-b423-607a56d2006d-kube-api-access-bnr9h\") pod \"556e2d87-7451-48af-b423-607a56d2006d\" (UID: \"556e2d87-7451-48af-b423-607a56d2006d\") " Jul 7 06:12:50.629915 kubelet[3281]: I0707 06:12:50.629824 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/556e2d87-7451-48af-b423-607a56d2006d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "556e2d87-7451-48af-b423-607a56d2006d" (UID: "556e2d87-7451-48af-b423-607a56d2006d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:12:50.631035 kubelet[3281]: I0707 06:12:50.630994 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/556e2d87-7451-48af-b423-607a56d2006d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "556e2d87-7451-48af-b423-607a56d2006d" (UID: "556e2d87-7451-48af-b423-607a56d2006d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:12:50.631035 kubelet[3281]: I0707 06:12:50.631003 3281 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/556e2d87-7451-48af-b423-607a56d2006d-kube-api-access-bnr9h" (OuterVolumeSpecName: "kube-api-access-bnr9h") pod "556e2d87-7451-48af-b423-607a56d2006d" (UID: "556e2d87-7451-48af-b423-607a56d2006d"). InnerVolumeSpecName "kube-api-access-bnr9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:12:50.731132 kubelet[3281]: I0707 06:12:50.731003 3281 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bnr9h\" (UniqueName: \"kubernetes.io/projected/556e2d87-7451-48af-b423-607a56d2006d-kube-api-access-bnr9h\") on node \"ci-4372.0.1-a-2cf65e3e62\" DevicePath \"\"" Jul 7 06:12:50.731132 kubelet[3281]: I0707 06:12:50.731069 3281 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/556e2d87-7451-48af-b423-607a56d2006d-whisker-backend-key-pair\") on node \"ci-4372.0.1-a-2cf65e3e62\" DevicePath \"\"" Jul 7 06:12:50.731132 kubelet[3281]: I0707 06:12:50.731096 3281 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/556e2d87-7451-48af-b423-607a56d2006d-whisker-ca-bundle\") on node \"ci-4372.0.1-a-2cf65e3e62\" DevicePath \"\"" Jul 7 06:12:50.934714 systemd[1]: Removed slice kubepods-besteffort-pod556e2d87_7451_48af_b423_607a56d2006d.slice - libcontainer container kubepods-besteffort-pod556e2d87_7451_48af_b423_607a56d2006d.slice. Jul 7 06:12:50.955606 kubelet[3281]: I0707 06:12:50.955447 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j5cn9" podStartSLOduration=1.8748237410000002 podStartE2EDuration="13.955399173s" podCreationTimestamp="2025-07-07 06:12:37 +0000 UTC" firstStartedPulling="2025-07-07 06:12:38.256060418 +0000 UTC m=+16.483034209" lastFinishedPulling="2025-07-07 06:12:50.336635849 +0000 UTC m=+28.563609641" observedRunningTime="2025-07-07 06:12:50.954696947 +0000 UTC m=+29.181670821" watchObservedRunningTime="2025-07-07 06:12:50.955399173 +0000 UTC m=+29.182373052" Jul 7 06:12:51.017544 systemd[1]: Created slice kubepods-besteffort-pod36fc6d74_e99a_4958_ac00_625c68402316.slice - libcontainer container kubepods-besteffort-pod36fc6d74_e99a_4958_ac00_625c68402316.slice. Jul 7 06:12:51.033979 kubelet[3281]: I0707 06:12:51.033870 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36fc6d74-e99a-4958-ac00-625c68402316-whisker-backend-key-pair\") pod \"whisker-7bd7997cf7-6nmfj\" (UID: \"36fc6d74-e99a-4958-ac00-625c68402316\") " pod="calico-system/whisker-7bd7997cf7-6nmfj" Jul 7 06:12:51.034356 kubelet[3281]: I0707 06:12:51.034250 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36fc6d74-e99a-4958-ac00-625c68402316-whisker-ca-bundle\") pod \"whisker-7bd7997cf7-6nmfj\" (UID: \"36fc6d74-e99a-4958-ac00-625c68402316\") " pod="calico-system/whisker-7bd7997cf7-6nmfj" Jul 7 06:12:51.034550 kubelet[3281]: I0707 06:12:51.034380 3281 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv4tv\" (UniqueName: \"kubernetes.io/projected/36fc6d74-e99a-4958-ac00-625c68402316-kube-api-access-lv4tv\") pod \"whisker-7bd7997cf7-6nmfj\" (UID: \"36fc6d74-e99a-4958-ac00-625c68402316\") " pod="calico-system/whisker-7bd7997cf7-6nmfj" Jul 7 06:12:51.319493 systemd[1]: var-lib-kubelet-pods-556e2d87\x2d7451\x2d48af\x2db423\x2d607a56d2006d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbnr9h.mount: Deactivated successfully. 
Jul 7 06:12:51.319563 systemd[1]: var-lib-kubelet-pods-556e2d87\x2d7451\x2d48af\x2db423\x2d607a56d2006d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:12:51.321021 containerd[1923]: time="2025-07-07T06:12:51.320963182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd7997cf7-6nmfj,Uid:36fc6d74-e99a-4958-ac00-625c68402316,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:51.377957 systemd-networkd[1840]: calicf357f7647e: Link UP Jul 7 06:12:51.378187 systemd-networkd[1840]: calicf357f7647e: Gained carrier Jul 7 06:12:51.386013 containerd[1923]: 2025-07-07 06:12:51.332 [INFO][4663] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:12:51.386013 containerd[1923]: 2025-07-07 06:12:51.340 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0 whisker-7bd7997cf7- calico-system 36fc6d74-e99a-4958-ac00-625c68402316 858 0 2025-07-07 06:12:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bd7997cf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 whisker-7bd7997cf7-6nmfj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicf357f7647e [] [] }} ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-" Jul 7 06:12:51.386013 containerd[1923]: 2025-07-07 06:12:51.340 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" Jul 7 06:12:51.386013 containerd[1923]: 2025-07-07 06:12:51.353 [INFO][4684] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" HandleID="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.353 [INFO][4684] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" HandleID="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"whisker-7bd7997cf7-6nmfj", "timestamp":"2025-07-07 06:12:51.353576727 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.353 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.353 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
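The mount unit names in the lines above (run-netns-cni\x2d..., var-lib-kubelet-pods-...\x7eprojected-...) are systemd's escaped form of filesystem paths: '/' becomes '-', and characters such as '-' and '~' are hex-escaped as \x2d and \x7e. A simplified Go sketch of that escaping (not the full systemd-escape implementation; edge cases like a leading '.' are ignored):

package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified version of the path escaping seen in the
// .mount unit names above: '/' maps to '-', most other special characters
// map to \xXX.
func escapePath(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Prints run-netns-cni\x2d4577d7c9\x2d...\x2da766a6c607d0.mount,
	// matching the unit name cleaned up earlier in the log.
	fmt.Println(escapePath("/run/netns/cni-4577d7c9-8ecd-6762-10fe-a766a6c607d0") + ".mount")
}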
Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.353 [INFO][4684] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.358 [INFO][4684] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.361 [INFO][4684] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.363 [INFO][4684] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.364 [INFO][4684] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386462 containerd[1923]: 2025-07-07 06:12:51.365 [INFO][4684] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386694 containerd[1923]: 2025-07-07 06:12:51.365 [INFO][4684] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386694 containerd[1923]: 2025-07-07 06:12:51.366 [INFO][4684] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c Jul 7 06:12:51.386694 containerd[1923]: 2025-07-07 06:12:51.368 [INFO][4684] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386694 containerd[1923]: 2025-07-07 06:12:51.371 [INFO][4684] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.129/26] block=192.168.47.128/26 handle="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386694 containerd[1923]: 2025-07-07 06:12:51.371 [INFO][4684] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.129/26] handle="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:51.386694 containerd[1923]: 2025-07-07 06:12:51.371 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
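The IPAM lines above show the plugin confirming this node's affinity for the block 192.168.47.128/26 and then handing out 192.168.47.129 from it. A quick Go check, using only the standard library, that the assigned address falls inside that block and how many addresses a /26 holds:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.47.128/26")
	assigned := netip.MustParseAddr("192.168.47.129")

	// The whisker pod's address must land inside the node's affine block.
	fmt.Println("in block:", block.Contains(assigned)) // true

	// A /26 spans 2^(32-26) = 64 addresses: .128 through .191.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}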
Jul 7 06:12:51.386694 containerd[1923]: 2025-07-07 06:12:51.371 [INFO][4684] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.129/26] IPv6=[] ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" HandleID="k8s-pod-network.5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" Jul 7 06:12:51.386852 containerd[1923]: 2025-07-07 06:12:51.373 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0", GenerateName:"whisker-7bd7997cf7-", Namespace:"calico-system", SelfLink:"", UID:"36fc6d74-e99a-4958-ac00-625c68402316", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd7997cf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"whisker-7bd7997cf7-6nmfj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf357f7647e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:51.386852 containerd[1923]: 2025-07-07 06:12:51.373 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.129/32] ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" Jul 7 06:12:51.386931 containerd[1923]: 2025-07-07 06:12:51.373 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf357f7647e ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" Jul 7 06:12:51.386931 containerd[1923]: 2025-07-07 06:12:51.378 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" Jul 7 06:12:51.386980 containerd[1923]: 2025-07-07 06:12:51.379 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" 
Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0", GenerateName:"whisker-7bd7997cf7-", Namespace:"calico-system", SelfLink:"", UID:"36fc6d74-e99a-4958-ac00-625c68402316", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bd7997cf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c", Pod:"whisker-7bd7997cf7-6nmfj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicf357f7647e", MAC:"8a:3a:1e:d0:d8:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:51.387038 containerd[1923]: 2025-07-07 06:12:51.384 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" Namespace="calico-system" Pod="whisker-7bd7997cf7-6nmfj" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-whisker--7bd7997cf7--6nmfj-eth0" Jul 7 06:12:51.405723 containerd[1923]: time="2025-07-07T06:12:51.405696980Z" level=info msg="connecting to shim 5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c" address="unix:///run/containerd/s/f9598b5e629bd82c201b2ba4ba76d0744bb91922510138c9f7b6a2f2f415910d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:51.422921 systemd[1]: Started cri-containerd-5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c.scope - libcontainer container 5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c. 
Jul 7 06:12:51.453325 containerd[1923]: time="2025-07-07T06:12:51.453273079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd7997cf7-6nmfj,Uid:36fc6d74-e99a-4958-ac00-625c68402316,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c\"" Jul 7 06:12:51.453943 containerd[1923]: time="2025-07-07T06:12:51.453929854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:12:51.829732 kubelet[3281]: I0707 06:12:51.829630 3281 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="556e2d87-7451-48af-b423-607a56d2006d" path="/var/lib/kubelet/pods/556e2d87-7451-48af-b423-607a56d2006d/volumes" Jul 7 06:12:51.928465 kubelet[3281]: I0707 06:12:51.928421 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:12:52.913004 containerd[1923]: time="2025-07-07T06:12:52.912972948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:52.913284 containerd[1923]: time="2025-07-07T06:12:52.913256278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 06:12:52.913647 containerd[1923]: time="2025-07-07T06:12:52.913634679Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:52.914456 containerd[1923]: time="2025-07-07T06:12:52.914444205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:52.914848 containerd[1923]: time="2025-07-07T06:12:52.914834808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.460889523s" Jul 7 06:12:52.914880 containerd[1923]: time="2025-07-07T06:12:52.914850448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 06:12:52.916324 containerd[1923]: time="2025-07-07T06:12:52.916311206Z" level=info msg="CreateContainer within sandbox \"5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:12:52.918333 containerd[1923]: time="2025-07-07T06:12:52.918290398Z" level=info msg="Container c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:52.921836 containerd[1923]: time="2025-07-07T06:12:52.921793785Z" level=info msg="CreateContainer within sandbox \"5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af\"" Jul 7 06:12:52.922123 containerd[1923]: time="2025-07-07T06:12:52.922080943Z" level=info msg="StartContainer for \"c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af\"" Jul 7 06:12:52.922782 containerd[1923]: 
time="2025-07-07T06:12:52.922752954Z" level=info msg="connecting to shim c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af" address="unix:///run/containerd/s/f9598b5e629bd82c201b2ba4ba76d0744bb91922510138c9f7b6a2f2f415910d" protocol=ttrpc version=3 Jul 7 06:12:52.950906 systemd[1]: Started cri-containerd-c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af.scope - libcontainer container c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af. Jul 7 06:12:52.972799 systemd-networkd[1840]: calicf357f7647e: Gained IPv6LL Jul 7 06:12:52.982869 containerd[1923]: time="2025-07-07T06:12:52.982841650Z" level=info msg="StartContainer for \"c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af\" returns successfully" Jul 7 06:12:52.983522 containerd[1923]: time="2025-07-07T06:12:52.983507031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:12:54.827667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801651028.mount: Deactivated successfully. Jul 7 06:12:54.832247 containerd[1923]: time="2025-07-07T06:12:54.832230365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:54.832515 containerd[1923]: time="2025-07-07T06:12:54.832499904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 06:12:54.832864 containerd[1923]: time="2025-07-07T06:12:54.832854020Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:54.833790 containerd[1923]: time="2025-07-07T06:12:54.833779929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:54.834182 containerd[1923]: time="2025-07-07T06:12:54.834171845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.850647794s" Jul 7 06:12:54.834209 containerd[1923]: time="2025-07-07T06:12:54.834186499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 06:12:54.835586 containerd[1923]: time="2025-07-07T06:12:54.835574989Z" level=info msg="CreateContainer within sandbox \"5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:12:54.837974 containerd[1923]: time="2025-07-07T06:12:54.837939150Z" level=info msg="Container 791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:54.840705 containerd[1923]: time="2025-07-07T06:12:54.840653919Z" level=info msg="CreateContainer within sandbox \"5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d\"" Jul 7 06:12:54.840911 containerd[1923]: time="2025-07-07T06:12:54.840897833Z" level=info msg="StartContainer for \"791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d\"" Jul 7 06:12:54.841421 containerd[1923]: time="2025-07-07T06:12:54.841411400Z" level=info msg="connecting to shim 791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d" address="unix:///run/containerd/s/f9598b5e629bd82c201b2ba4ba76d0744bb91922510138c9f7b6a2f2f415910d" protocol=ttrpc version=3 Jul 7 06:12:54.859130 systemd[1]: Started cri-containerd-791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d.scope - libcontainer container 791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d. Jul 7 06:12:54.965390 containerd[1923]: time="2025-07-07T06:12:54.965361951Z" level=info msg="StartContainer for \"791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d\" returns successfully" Jul 7 06:12:55.971964 kubelet[3281]: I0707 06:12:55.971798 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bd7997cf7-6nmfj" podStartSLOduration=2.591078287 podStartE2EDuration="5.971760045s" podCreationTimestamp="2025-07-07 06:12:50 +0000 UTC" firstStartedPulling="2025-07-07 06:12:51.453840535 +0000 UTC m=+29.680814324" lastFinishedPulling="2025-07-07 06:12:54.83452229 +0000 UTC m=+33.061496082" observedRunningTime="2025-07-07 06:12:55.970446343 +0000 UTC m=+34.197420226" watchObservedRunningTime="2025-07-07 06:12:55.971760045 +0000 UTC m=+34.198733930" Jul 7 06:12:57.825388 containerd[1923]: time="2025-07-07T06:12:57.825263264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-4q2rm,Uid:e310f79c-0355-4c6a-bc79-ee486f94963f,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:12:57.888524 systemd-networkd[1840]: cali2ec5d5877d9: Link UP Jul 7 06:12:57.888814 systemd-networkd[1840]: cali2ec5d5877d9: Gained carrier Jul 7 06:12:57.896994 containerd[1923]: 2025-07-07 06:12:57.837 [INFO][5257] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:12:57.896994 containerd[1923]: 2025-07-07 06:12:57.844 [INFO][5257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0 calico-apiserver-6cf9698c94- calico-apiserver e310f79c-0355-4c6a-bc79-ee486f94963f 793 0 2025-07-07 06:12:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cf9698c94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 calico-apiserver-6cf9698c94-4q2rm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ec5d5877d9 [] [] }} ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-" Jul 7 06:12:57.896994 containerd[1923]: 2025-07-07 06:12:57.845 [INFO][5257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" 
WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" Jul 7 06:12:57.896994 containerd[1923]: 2025-07-07 06:12:57.857 [INFO][5277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" HandleID="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.858 [INFO][5277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" HandleID="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000525610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"calico-apiserver-6cf9698c94-4q2rm", "timestamp":"2025-07-07 06:12:57.85797207 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.858 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.858 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.858 [INFO][5277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.863 [INFO][5277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.866 [INFO][5277] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.870 [INFO][5277] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.871 [INFO][5277] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897242 containerd[1923]: 2025-07-07 06:12:57.873 [INFO][5277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897532 containerd[1923]: 2025-07-07 06:12:57.873 [INFO][5277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897532 containerd[1923]: 2025-07-07 06:12:57.874 [INFO][5277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1 Jul 7 06:12:57.897532 containerd[1923]: 2025-07-07 06:12:57.882 [INFO][5277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 
06:12:57.897532 containerd[1923]: 2025-07-07 06:12:57.885 [INFO][5277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.130/26] block=192.168.47.128/26 handle="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897532 containerd[1923]: 2025-07-07 06:12:57.885 [INFO][5277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.130/26] handle="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:57.897532 containerd[1923]: 2025-07-07 06:12:57.885 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:12:57.897532 containerd[1923]: 2025-07-07 06:12:57.885 [INFO][5277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.130/26] IPv6=[] ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" HandleID="k8s-pod-network.2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" Jul 7 06:12:57.897731 containerd[1923]: 2025-07-07 06:12:57.887 [INFO][5257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0", GenerateName:"calico-apiserver-6cf9698c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"e310f79c-0355-4c6a-bc79-ee486f94963f", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cf9698c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"calico-apiserver-6cf9698c94-4q2rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ec5d5877d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:57.897810 containerd[1923]: 2025-07-07 06:12:57.887 [INFO][5257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.130/32] ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" Jul 7 06:12:57.897810 containerd[1923]: 2025-07-07 06:12:57.887 [INFO][5257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ec5d5877d9 
ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" Jul 7 06:12:57.897810 containerd[1923]: 2025-07-07 06:12:57.889 [INFO][5257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" Jul 7 06:12:57.897910 containerd[1923]: 2025-07-07 06:12:57.889 [INFO][5257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0", GenerateName:"calico-apiserver-6cf9698c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"e310f79c-0355-4c6a-bc79-ee486f94963f", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cf9698c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1", Pod:"calico-apiserver-6cf9698c94-4q2rm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ec5d5877d9", MAC:"2a:e8:ee:77:7e:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:57.897974 containerd[1923]: 2025-07-07 06:12:57.895 [INFO][5257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-4q2rm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--4q2rm-eth0" Jul 7 06:12:57.922253 containerd[1923]: time="2025-07-07T06:12:57.922213146Z" level=info msg="connecting to shim 2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1" address="unix:///run/containerd/s/92f0f10d48a15f167c35cf4640e6f8bbb2534700978baae00857562e1a0aeb78" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:57.945015 systemd[1]: Started cri-containerd-2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1.scope - libcontainer container 2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1. 
Jul 7 06:12:57.971241 containerd[1923]: time="2025-07-07T06:12:57.971219653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-4q2rm,Uid:e310f79c-0355-4c6a-bc79-ee486f94963f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1\"" Jul 7 06:12:57.971915 containerd[1923]: time="2025-07-07T06:12:57.971905658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:12:58.825330 containerd[1923]: time="2025-07-07T06:12:58.825248909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt2hm,Uid:3b203fd7-1f2e-4ab1-b93c-16937892999c,Namespace:kube-system,Attempt:0,}" Jul 7 06:12:58.825637 containerd[1923]: time="2025-07-07T06:12:58.825400287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7f86ddf5-cjvwq,Uid:de6e7e2a-0310-4167-ae5f-165f0cf836d6,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:58.885096 systemd-networkd[1840]: cali50ea5a841bc: Link UP Jul 7 06:12:58.885228 systemd-networkd[1840]: cali50ea5a841bc: Gained carrier Jul 7 06:12:58.891330 containerd[1923]: 2025-07-07 06:12:58.838 [INFO][5398] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:12:58.891330 containerd[1923]: 2025-07-07 06:12:58.846 [INFO][5398] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0 coredns-674b8bbfcf- kube-system 3b203fd7-1f2e-4ab1-b93c-16937892999c 797 0 2025-07-07 06:12:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 coredns-674b8bbfcf-nt2hm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50ea5a841bc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-" Jul 7 06:12:58.891330 containerd[1923]: 2025-07-07 06:12:58.846 [INFO][5398] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" Jul 7 06:12:58.891330 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5441] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" HandleID="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5441] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" HandleID="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043bbe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"coredns-674b8bbfcf-nt2hm", "timestamp":"2025-07-07 
06:12:58.859367523 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5441] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.864 [INFO][5441] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.868 [INFO][5441] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.871 [INFO][5441] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.872 [INFO][5441] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891487 containerd[1923]: 2025-07-07 06:12:58.874 [INFO][5441] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891639 containerd[1923]: 2025-07-07 06:12:58.874 [INFO][5441] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891639 containerd[1923]: 2025-07-07 06:12:58.875 [INFO][5441] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33 Jul 7 06:12:58.891639 containerd[1923]: 2025-07-07 06:12:58.880 [INFO][5441] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891639 containerd[1923]: 2025-07-07 06:12:58.883 [INFO][5441] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.131/26] block=192.168.47.128/26 handle="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891639 containerd[1923]: 2025-07-07 06:12:58.883 [INFO][5441] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.131/26] handle="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.891639 containerd[1923]: 2025-07-07 06:12:58.883 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:12:58.891639 containerd[1923]: 2025-07-07 06:12:58.883 [INFO][5441] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.131/26] IPv6=[] ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" HandleID="k8s-pod-network.c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" Jul 7 06:12:58.891779 containerd[1923]: 2025-07-07 06:12:58.884 [INFO][5398] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3b203fd7-1f2e-4ab1-b93c-16937892999c", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"coredns-674b8bbfcf-nt2hm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50ea5a841bc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:58.891779 containerd[1923]: 2025-07-07 06:12:58.884 [INFO][5398] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.131/32] ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" Jul 7 06:12:58.891779 containerd[1923]: 2025-07-07 06:12:58.884 [INFO][5398] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50ea5a841bc ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" Jul 7 06:12:58.891779 containerd[1923]: 2025-07-07 06:12:58.885 [INFO][5398] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" 
WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" Jul 7 06:12:58.891779 containerd[1923]: 2025-07-07 06:12:58.885 [INFO][5398] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3b203fd7-1f2e-4ab1-b93c-16937892999c", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33", Pod:"coredns-674b8bbfcf-nt2hm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50ea5a841bc", MAC:"6a:38:69:76:7e:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:58.891779 containerd[1923]: 2025-07-07 06:12:58.890 [INFO][5398] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt2hm" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--nt2hm-eth0" Jul 7 06:12:58.899508 containerd[1923]: time="2025-07-07T06:12:58.899478111Z" level=info msg="connecting to shim c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33" address="unix:///run/containerd/s/896a05daa9d2c2f478cc23d0c3c6aed2ee51cb74d829a0f3fc6d01fb91702d7d" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:58.923107 systemd[1]: Started cri-containerd-c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33.scope - libcontainer container c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33. 
Jul 7 06:12:58.985088 systemd-networkd[1840]: calia5e9c1487c6: Link UP Jul 7 06:12:58.985242 systemd-networkd[1840]: calia5e9c1487c6: Gained carrier Jul 7 06:12:58.996531 containerd[1923]: time="2025-07-07T06:12:58.996501735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt2hm,Uid:3b203fd7-1f2e-4ab1-b93c-16937892999c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33\"" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.838 [INFO][5404] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.846 [INFO][5404] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0 calico-kube-controllers-5c7f86ddf5- calico-system de6e7e2a-0310-4167-ae5f-165f0cf836d6 795 0 2025-07-07 06:12:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c7f86ddf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 calico-kube-controllers-5c7f86ddf5-cjvwq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia5e9c1487c6 [] [] }} ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.846 [INFO][5404] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5440] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" HandleID="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5440] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" HandleID="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000788a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"calico-kube-controllers-5c7f86ddf5-cjvwq", "timestamp":"2025-07-07 06:12:58.859451227 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.859 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.883 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.883 [INFO][5440] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.966 [INFO][5440] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.969 [INFO][5440] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.974 [INFO][5440] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.976 [INFO][5440] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.977 [INFO][5440] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.977 [INFO][5440] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.978 [INFO][5440] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.980 [INFO][5440] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.982 [INFO][5440] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.132/26] block=192.168.47.128/26 handle="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.982 [INFO][5440] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.132/26] handle="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.982 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
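Across this section the block 192.168.47.128/26 hands out addresses in arrival order: .129 (whisker), .130 (calico-apiserver), .131 (coredns), .132 (calico-kube-controllers). A toy Go allocator illustrating that first-free-address ordering only; Calico's real IPAM is far more involved (affinities, handles, and the host-wide lock logged above):

package main

import (
	"fmt"
	"net/netip"
)

// toyAssign hands out the lowest free address in the block, skipping the
// network address itself. Illustration only, not Calico's allocator.
func toyAssign(block netip.Prefix, used map[netip.Addr]bool) netip.Addr {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a
		}
	}
	return netip.Addr{}
}

func main() {
	block := netip.MustParsePrefix("192.168.47.128/26")
	used := map[netip.Addr]bool{}
	// Prints .129, .130, .131, .132 in order, matching the log.
	for _, pod := range []string{"whisker", "apiserver", "coredns", "kube-controllers"} {
		fmt.Println(pod, toyAssign(block, used))
	}
}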
Jul 7 06:12:58.996963 containerd[1923]: 2025-07-07 06:12:58.982 [INFO][5440] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.132/26] IPv6=[] ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" HandleID="k8s-pod-network.6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" Jul 7 06:12:58.997336 containerd[1923]: 2025-07-07 06:12:58.984 [INFO][5404] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0", GenerateName:"calico-kube-controllers-5c7f86ddf5-", Namespace:"calico-system", SelfLink:"", UID:"de6e7e2a-0310-4167-ae5f-165f0cf836d6", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7f86ddf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"calico-kube-controllers-5c7f86ddf5-cjvwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5e9c1487c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:58.997336 containerd[1923]: 2025-07-07 06:12:58.984 [INFO][5404] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.132/32] ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" Jul 7 06:12:58.997336 containerd[1923]: 2025-07-07 06:12:58.984 [INFO][5404] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5e9c1487c6 ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" Jul 7 06:12:58.997336 containerd[1923]: 2025-07-07 06:12:58.985 [INFO][5404] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" 
Jul 7 06:12:58.997336 containerd[1923]: 2025-07-07 06:12:58.986 [INFO][5404] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0", GenerateName:"calico-kube-controllers-5c7f86ddf5-", Namespace:"calico-system", SelfLink:"", UID:"de6e7e2a-0310-4167-ae5f-165f0cf836d6", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7f86ddf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f", Pod:"calico-kube-controllers-5c7f86ddf5-cjvwq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5e9c1487c6", MAC:"6e:d9:8a:89:08:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:58.997336 containerd[1923]: 2025-07-07 06:12:58.995 [INFO][5404] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" Namespace="calico-system" Pod="calico-kube-controllers-5c7f86ddf5-cjvwq" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--kube--controllers--5c7f86ddf5--cjvwq-eth0" Jul 7 06:12:58.998914 containerd[1923]: time="2025-07-07T06:12:58.998885509Z" level=info msg="CreateContainer within sandbox \"c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:12:59.003382 containerd[1923]: time="2025-07-07T06:12:59.003355165Z" level=info msg="Container 72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:59.006196 containerd[1923]: time="2025-07-07T06:12:59.006165259Z" level=info msg="CreateContainer within sandbox \"c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0\"" Jul 7 06:12:59.006391 containerd[1923]: time="2025-07-07T06:12:59.006372796Z" level=info msg="connecting to shim 6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f" address="unix:///run/containerd/s/dcae22f466a509e3ec0ad9c6e06fd38645bd4b21cc2244b2d90faff285bed23c" namespace=k8s.io protocol=ttrpc version=3 
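The "connecting to shim" entries above amount to dialing a per-sandbox unix socket under /run/containerd/s/ and then speaking ttrpc over that connection. A sketch of just the dial step, reusing the socket path from the log (the path only exists on that host, so the dial is expected to fail anywhere else):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the log line above.
	const addr = "/run/containerd/s/dcae22f466a509e3ec0ad9c6e06fd38645bd4b21cc2244b2d90faff285bed23c"
	conn, err := net.DialTimeout("unix", addr, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected off-host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", conn.RemoteAddr())
	// containerd would now wrap conn in a ttrpc client and issue task API calls;
	// that layer is omitted here.
}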
Jul 7 06:12:59.006738 containerd[1923]: time="2025-07-07T06:12:59.006722546Z" level=info msg="StartContainer for \"72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0\"" Jul 7 06:12:59.007303 containerd[1923]: time="2025-07-07T06:12:59.007290941Z" level=info msg="connecting to shim 72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0" address="unix:///run/containerd/s/896a05daa9d2c2f478cc23d0c3c6aed2ee51cb74d829a0f3fc6d01fb91702d7d" protocol=ttrpc version=3 Jul 7 06:12:59.030861 systemd[1]: Started cri-containerd-72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0.scope - libcontainer container 72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0. Jul 7 06:12:59.032614 systemd[1]: Started cri-containerd-6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f.scope - libcontainer container 6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f. Jul 7 06:12:59.043889 containerd[1923]: time="2025-07-07T06:12:59.043864850Z" level=info msg="StartContainer for \"72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0\" returns successfully" Jul 7 06:12:59.053736 systemd-networkd[1840]: cali2ec5d5877d9: Gained IPv6LL Jul 7 06:12:59.059410 containerd[1923]: time="2025-07-07T06:12:59.059384031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7f86ddf5-cjvwq,Uid:de6e7e2a-0310-4167-ae5f-165f0cf836d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f\"" Jul 7 06:12:59.824005 containerd[1923]: time="2025-07-07T06:12:59.823979383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fpfpz,Uid:f039077b-1995-4908-9f12-7dced13a4d6a,Namespace:calico-system,Attempt:0,}" Jul 7 06:12:59.885482 systemd-networkd[1840]: calic6313ee35af: Link UP Jul 7 06:12:59.885637 systemd-networkd[1840]: calic6313ee35af: Gained carrier Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.836 [INFO][5677] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.843 [INFO][5677] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0 goldmane-768f4c5c69- calico-system f039077b-1995-4908-9f12-7dced13a4d6a 800 0 2025-07-07 06:12:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 goldmane-768f4c5c69-fpfpz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic6313ee35af [] [] }} ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.843 [INFO][5677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.858 [INFO][5700] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" HandleID="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.858 [INFO][5700] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" HandleID="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d16b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"goldmane-768f4c5c69-fpfpz", "timestamp":"2025-07-07 06:12:59.858124369 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.858 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.858 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.858 [INFO][5700] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.863 [INFO][5700] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.867 [INFO][5700] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.871 [INFO][5700] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.872 [INFO][5700] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.873 [INFO][5700] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.873 [INFO][5700] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.874 [INFO][5700] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5 Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.880 [INFO][5700] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.883 [INFO][5700] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.133/26] block=192.168.47.128/26 handle="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" 
host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.883 [INFO][5700] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.133/26] handle="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.883 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:12:59.891580 containerd[1923]: 2025-07-07 06:12:59.883 [INFO][5700] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.133/26] IPv6=[] ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" HandleID="k8s-pod-network.50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" Jul 7 06:12:59.892097 containerd[1923]: 2025-07-07 06:12:59.884 [INFO][5677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f039077b-1995-4908-9f12-7dced13a4d6a", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"goldmane-768f4c5c69-fpfpz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6313ee35af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:59.892097 containerd[1923]: 2025-07-07 06:12:59.884 [INFO][5677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.133/32] ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" Jul 7 06:12:59.892097 containerd[1923]: 2025-07-07 06:12:59.884 [INFO][5677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6313ee35af ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" Jul 7 06:12:59.892097 containerd[1923]: 2025-07-07 06:12:59.885 [INFO][5677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" Jul 7 06:12:59.892097 containerd[1923]: 2025-07-07 06:12:59.885 [INFO][5677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f039077b-1995-4908-9f12-7dced13a4d6a", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5", Pod:"goldmane-768f4c5c69-fpfpz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6313ee35af", MAC:"fe:0a:da:49:b8:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:12:59.892097 containerd[1923]: 2025-07-07 06:12:59.890 [INFO][5677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" Namespace="calico-system" Pod="goldmane-768f4c5c69-fpfpz" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-goldmane--768f4c5c69--fpfpz-eth0" Jul 7 06:12:59.899552 containerd[1923]: time="2025-07-07T06:12:59.899526518Z" level=info msg="connecting to shim 50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5" address="unix:///run/containerd/s/729330c6487df0c3c33b99c2d4b17425fa080eefbb20fecb12d93e3dde4d8f48" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:12:59.914730 systemd[1]: Started cri-containerd-50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5.scope - libcontainer container 50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5. 
Jul 7 06:12:59.939000 containerd[1923]: time="2025-07-07T06:12:59.938972707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:59.939225 containerd[1923]: time="2025-07-07T06:12:59.939206499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 06:12:59.939641 containerd[1923]: time="2025-07-07T06:12:59.939627864Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:59.940555 containerd[1923]: time="2025-07-07T06:12:59.940507218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-fpfpz,Uid:f039077b-1995-4908-9f12-7dced13a4d6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5\"" Jul 7 06:12:59.940555 containerd[1923]: time="2025-07-07T06:12:59.940525607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:59.940916 containerd[1923]: time="2025-07-07T06:12:59.940877141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.968956914s" Jul 7 06:12:59.940916 containerd[1923]: time="2025-07-07T06:12:59.940890626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 06:12:59.941254 containerd[1923]: time="2025-07-07T06:12:59.941242416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:12:59.942286 containerd[1923]: time="2025-07-07T06:12:59.942273238Z" level=info msg="CreateContainer within sandbox \"2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:12:59.944711 containerd[1923]: time="2025-07-07T06:12:59.944673568Z" level=info msg="Container a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:59.947206 containerd[1923]: time="2025-07-07T06:12:59.947164726Z" level=info msg="CreateContainer within sandbox \"2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0\"" Jul 7 06:12:59.947383 containerd[1923]: time="2025-07-07T06:12:59.947369626Z" level=info msg="StartContainer for \"a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0\"" Jul 7 06:12:59.947911 containerd[1923]: time="2025-07-07T06:12:59.947867345Z" level=info msg="connecting to shim a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0" address="unix:///run/containerd/s/92f0f10d48a15f167c35cf4640e6f8bbb2534700978baae00857562e1a0aeb78" protocol=ttrpc version=3 Jul 7 06:12:59.963764 systemd[1]: Started 
cri-containerd-a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0.scope - libcontainer container a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0. Jul 7 06:12:59.969455 kubelet[3281]: I0707 06:12:59.969410 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nt2hm" podStartSLOduration=31.969395935 podStartE2EDuration="31.969395935s" podCreationTimestamp="2025-07-07 06:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:12:59.969030901 +0000 UTC m=+38.196004692" watchObservedRunningTime="2025-07-07 06:12:59.969395935 +0000 UTC m=+38.196369725" Jul 7 06:12:59.992090 containerd[1923]: time="2025-07-07T06:12:59.992041181Z" level=info msg="StartContainer for \"a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0\" returns successfully" Jul 7 06:13:00.716861 systemd-networkd[1840]: cali50ea5a841bc: Gained IPv6LL Jul 7 06:13:00.825554 containerd[1923]: time="2025-07-07T06:13:00.825448678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgmw9,Uid:3b3041ea-8291-4b09-bfe8-d14c2bd2ab11,Namespace:calico-system,Attempt:0,}" Jul 7 06:13:00.825554 containerd[1923]: time="2025-07-07T06:13:00.825526435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fbsmv,Uid:1d759a72-d0fc-47cd-8309-fee81138534e,Namespace:kube-system,Attempt:0,}" Jul 7 06:13:00.884920 systemd-networkd[1840]: cali1ff116bc951: Link UP Jul 7 06:13:00.885070 systemd-networkd[1840]: cali1ff116bc951: Gained carrier Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.843 [INFO][5884] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.850 [INFO][5884] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0 coredns-674b8bbfcf- kube-system 1d759a72-d0fc-47cd-8309-fee81138534e 799 0 2025-07-07 06:12:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 coredns-674b8bbfcf-fbsmv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ff116bc951 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.850 [INFO][5884] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" HandleID="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5921] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" HandleID="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a57a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"coredns-674b8bbfcf-fbsmv", "timestamp":"2025-07-07 06:13:00.862079521 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.867 [INFO][5921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.870 [INFO][5921] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.874 [INFO][5921] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.875 [INFO][5921] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.876 [INFO][5921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.876 [INFO][5921] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.877 [INFO][5921] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.880 [INFO][5921] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.883 [INFO][5921] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.134/26] block=192.168.47.128/26 handle="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.883 [INFO][5921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.134/26] handle="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 
06:13:00.883 [INFO][5921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:13:00.903579 containerd[1923]: 2025-07-07 06:13:00.883 [INFO][5921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.134/26] IPv6=[] ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" HandleID="k8s-pod-network.bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" Jul 7 06:13:00.904087 containerd[1923]: 2025-07-07 06:13:00.883 [INFO][5884] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d759a72-d0fc-47cd-8309-fee81138534e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"coredns-674b8bbfcf-fbsmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ff116bc951", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:00.904087 containerd[1923]: 2025-07-07 06:13:00.884 [INFO][5884] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.134/32] ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" Jul 7 06:13:00.904087 containerd[1923]: 2025-07-07 06:13:00.884 [INFO][5884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ff116bc951 ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" Jul 7 06:13:00.904087 containerd[1923]: 2025-07-07 06:13:00.885 [INFO][5884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" Jul 7 06:13:00.904087 containerd[1923]: 2025-07-07 06:13:00.885 [INFO][5884] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1d759a72-d0fc-47cd-8309-fee81138534e", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc", Pod:"coredns-674b8bbfcf-fbsmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ff116bc951", MAC:"76:84:2b:3b:01:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:00.904087 containerd[1923]: 2025-07-07 06:13:00.902 [INFO][5884] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" Namespace="kube-system" Pod="coredns-674b8bbfcf-fbsmv" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-coredns--674b8bbfcf--fbsmv-eth0" Jul 7 06:13:00.910893 containerd[1923]: time="2025-07-07T06:13:00.910867796Z" level=info msg="connecting to shim bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc" address="unix:///run/containerd/s/54f1a44f8a034e9cebc96ae908d796da2f570404dad44f8d1ba21f1489684177" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:13:00.931964 systemd[1]: Started cri-containerd-bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc.scope - libcontainer container bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc. 
Jul 7 06:13:00.957054 containerd[1923]: time="2025-07-07T06:13:00.957031020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fbsmv,Uid:1d759a72-d0fc-47cd-8309-fee81138534e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc\"" Jul 7 06:13:00.958850 containerd[1923]: time="2025-07-07T06:13:00.958836869Z" level=info msg="CreateContainer within sandbox \"bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:13:00.961563 containerd[1923]: time="2025-07-07T06:13:00.961552882Z" level=info msg="Container 270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:00.963471 containerd[1923]: time="2025-07-07T06:13:00.963431257Z" level=info msg="CreateContainer within sandbox \"bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372\"" Jul 7 06:13:00.963574 containerd[1923]: time="2025-07-07T06:13:00.963564370Z" level=info msg="StartContainer for \"270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372\"" Jul 7 06:13:00.964094 containerd[1923]: time="2025-07-07T06:13:00.964079014Z" level=info msg="connecting to shim 270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372" address="unix:///run/containerd/s/54f1a44f8a034e9cebc96ae908d796da2f570404dad44f8d1ba21f1489684177" protocol=ttrpc version=3 Jul 7 06:13:00.970479 kubelet[3281]: I0707 06:13:00.970337 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cf9698c94-4q2rm" podStartSLOduration=24.000877518 podStartE2EDuration="25.970312645s" podCreationTimestamp="2025-07-07 06:12:35 +0000 UTC" firstStartedPulling="2025-07-07 06:12:57.971769071 +0000 UTC m=+36.198742862" lastFinishedPulling="2025-07-07 06:12:59.941204198 +0000 UTC m=+38.168177989" observedRunningTime="2025-07-07 06:13:00.970273994 +0000 UTC m=+39.197247789" watchObservedRunningTime="2025-07-07 06:13:00.970312645 +0000 UTC m=+39.197286436" Jul 7 06:13:00.973748 systemd-networkd[1840]: calia5e9c1487c6: Gained IPv6LL Jul 7 06:13:00.980804 systemd[1]: Started cri-containerd-270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372.scope - libcontainer container 270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372. 
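The kubelet pod_startup_latency_tracker line above (for calico-apiserver-6cf9698c94-4q2rm) is consistent with podStartE2EDuration being observed-running-time minus creation time, and podStartSLOduration being that figure minus the image-pull window; reproducing the arithmetic from the logged timestamps gives exactly 25.970312645s and 24.000877518s. The formulas here are inferred from the values, not taken from kubelet source:

package main

import (
	"fmt"
	"time"
)

// mustParse parses the "2025-07-07 06:12:35 +0000 UTC" style timestamps from the log.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-07 06:12:35 +0000 UTC")
	pullStart := mustParse("2025-07-07 06:12:57.971769071 +0000 UTC")
	pullEnd := mustParse("2025-07-07 06:12:59.941204198 +0000 UTC")
	running := mustParse("2025-07-07 06:13:00.970312645 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("E2E:", e2e) // 25.970312645s, matching podStartE2EDuration
	fmt.Println("SLO:", slo) // 24.000877518s, matching podStartSLOduration
}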
Jul 7 06:13:00.983800 systemd-networkd[1840]: caliaf7429630c8: Link UP Jul 7 06:13:00.983961 systemd-networkd[1840]: caliaf7429630c8: Gained carrier Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.843 [INFO][5879] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.851 [INFO][5879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0 csi-node-driver- calico-system 3b3041ea-8291-4b09-bfe8-d14c2bd2ab11 689 0 2025-07-07 06:12:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 csi-node-driver-rgmw9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaf7429630c8 [] [] }} ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.851 [INFO][5879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5923] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" HandleID="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5923] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" HandleID="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"csi-node-driver-rgmw9", "timestamp":"2025-07-07 06:13:00.862469551 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.862 [INFO][5923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.883 [INFO][5923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.883 [INFO][5923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.966 [INFO][5923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.969 [INFO][5923] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.974 [INFO][5923] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.975 [INFO][5923] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.976 [INFO][5923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.976 [INFO][5923] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.977 [INFO][5923] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.979 [INFO][5923] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.981 [INFO][5923] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.135/26] block=192.168.47.128/26 handle="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.981 [INFO][5923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.135/26] handle="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.981 [INFO][5923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
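Every allocation in this section comes out of the node's affine block 192.168.47.128/26, a 64-address range spanning .128 through .191; the .132 through .136 assignments logged here all fall inside it. A quick check with the standard library:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, _ := net.ParseCIDR("192.168.47.128/26")
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64

	for _, ip := range []string{"192.168.47.132", "192.168.47.135", "192.168.47.136"} {
		fmt.Println(ip, "in block:", block.Contains(net.ParseIP(ip))) // all true
	}
}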
Jul 7 06:13:00.990497 containerd[1923]: 2025-07-07 06:13:00.981 [INFO][5923] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.135/26] IPv6=[] ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" HandleID="k8s-pod-network.7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" Jul 7 06:13:00.990921 containerd[1923]: 2025-07-07 06:13:00.982 [INFO][5879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b3041ea-8291-4b09-bfe8-d14c2bd2ab11", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"csi-node-driver-rgmw9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf7429630c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:00.990921 containerd[1923]: 2025-07-07 06:13:00.983 [INFO][5879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.135/32] ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" Jul 7 06:13:00.990921 containerd[1923]: 2025-07-07 06:13:00.983 [INFO][5879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf7429630c8 ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" Jul 7 06:13:00.990921 containerd[1923]: 2025-07-07 06:13:00.984 [INFO][5879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" Jul 7 06:13:00.990921 containerd[1923]: 2025-07-07 06:13:00.984 [INFO][5879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b3041ea-8291-4b09-bfe8-d14c2bd2ab11", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c", Pod:"csi-node-driver-rgmw9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf7429630c8", MAC:"f6:bc:58:0e:52:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:00.990921 containerd[1923]: 2025-07-07 06:13:00.989 [INFO][5879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" Namespace="calico-system" Pod="csi-node-driver-rgmw9" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-csi--node--driver--rgmw9-eth0" Jul 7 06:13:00.994903 containerd[1923]: time="2025-07-07T06:13:00.994872867Z" level=info msg="StartContainer for \"270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372\" returns successfully" Jul 7 06:13:00.998853 containerd[1923]: time="2025-07-07T06:13:00.998813189Z" level=info msg="connecting to shim 7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c" address="unix:///run/containerd/s/936c4e0a89cf5c386589a962e463c236bc68d43a42530fc42934fdda1b1e7f5f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:13:01.017852 systemd[1]: Started cri-containerd-7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c.scope - libcontainer container 7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c. 
Jul 7 06:13:01.029313 containerd[1923]: time="2025-07-07T06:13:01.029291800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rgmw9,Uid:3b3041ea-8291-4b09-bfe8-d14c2bd2ab11,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c\"" Jul 7 06:13:01.484980 systemd-networkd[1840]: calic6313ee35af: Gained IPv6LL Jul 7 06:13:01.824142 containerd[1923]: time="2025-07-07T06:13:01.824111916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-kmd2s,Uid:99b5b42d-97bc-4221-9141-472378a6b5d5,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:13:01.893918 systemd-networkd[1840]: calid96321bbae8: Link UP Jul 7 06:13:01.894089 systemd-networkd[1840]: calid96321bbae8: Gained carrier Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.835 [INFO][6161] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.842 [INFO][6161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0 calico-apiserver-6cf9698c94- calico-apiserver 99b5b42d-97bc-4221-9141-472378a6b5d5 796 0 2025-07-07 06:12:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cf9698c94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-a-2cf65e3e62 calico-apiserver-6cf9698c94-kmd2s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid96321bbae8 [] [] }} ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.842 [INFO][6161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.856 [INFO][6185] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" HandleID="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.856 [INFO][6185] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" HandleID="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-a-2cf65e3e62", "pod":"calico-apiserver-6cf9698c94-kmd2s", "timestamp":"2025-07-07 06:13:01.856288153 +0000 UTC"}, Hostname:"ci-4372.0.1-a-2cf65e3e62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.856 [INFO][6185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.856 [INFO][6185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.856 [INFO][6185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-a-2cf65e3e62' Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.861 [INFO][6185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.865 [INFO][6185] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.868 [INFO][6185] ipam/ipam.go 511: Trying affinity for 192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.869 [INFO][6185] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.871 [INFO][6185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.871 [INFO][6185] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.872 [INFO][6185] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767 Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.889 [INFO][6185] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.891 [INFO][6185] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.47.136/26] block=192.168.47.128/26 handle="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.891 [INFO][6185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.136/26] handle="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" host="ci-4372.0.1-a-2cf65e3e62" Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.891 [INFO][6185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:13:01.899621 containerd[1923]: 2025-07-07 06:13:01.891 [INFO][6185] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.136/26] IPv6=[] ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" HandleID="k8s-pod-network.f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Workload="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" Jul 7 06:13:01.900050 containerd[1923]: 2025-07-07 06:13:01.892 [INFO][6161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0", GenerateName:"calico-apiserver-6cf9698c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"99b5b42d-97bc-4221-9141-472378a6b5d5", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cf9698c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"", Pod:"calico-apiserver-6cf9698c94-kmd2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid96321bbae8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:01.900050 containerd[1923]: 2025-07-07 06:13:01.893 [INFO][6161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.136/32] ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" Jul 7 06:13:01.900050 containerd[1923]: 2025-07-07 06:13:01.893 [INFO][6161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid96321bbae8 ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" Jul 7 06:13:01.900050 containerd[1923]: 2025-07-07 06:13:01.894 [INFO][6161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" Jul 7 06:13:01.900050 containerd[1923]: 2025-07-07 06:13:01.894 [INFO][6161] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0", GenerateName:"calico-apiserver-6cf9698c94-", Namespace:"calico-apiserver", SelfLink:"", UID:"99b5b42d-97bc-4221-9141-472378a6b5d5", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 12, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cf9698c94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-a-2cf65e3e62", ContainerID:"f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767", Pod:"calico-apiserver-6cf9698c94-kmd2s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid96321bbae8", MAC:"3e:2b:6d:3c:70:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:13:01.900050 containerd[1923]: 2025-07-07 06:13:01.898 [INFO][6161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" Namespace="calico-apiserver" Pod="calico-apiserver-6cf9698c94-kmd2s" WorkloadEndpoint="ci--4372.0.1--a--2cf65e3e62-k8s-calico--apiserver--6cf9698c94--kmd2s-eth0" Jul 7 06:13:01.907803 containerd[1923]: time="2025-07-07T06:13:01.907767998Z" level=info msg="connecting to shim f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767" address="unix:///run/containerd/s/791801a505ded155720732910af09f418a2b8acd3e2cce07bcbb88cad74fc684" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:13:01.921159 containerd[1923]: time="2025-07-07T06:13:01.921110121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:01.921382 containerd[1923]: time="2025-07-07T06:13:01.921345639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 06:13:01.921708 containerd[1923]: time="2025-07-07T06:13:01.921667181Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:01.923776 containerd[1923]: time="2025-07-07T06:13:01.923730556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:01.924136 containerd[1923]: time="2025-07-07T06:13:01.924094816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 1.982838291s" Jul 7 06:13:01.924136 containerd[1923]: time="2025-07-07T06:13:01.924109552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 06:13:01.924571 containerd[1923]: time="2025-07-07T06:13:01.924558933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:13:01.924799 systemd[1]: Started cri-containerd-f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767.scope - libcontainer container f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767. Jul 7 06:13:01.928416 containerd[1923]: time="2025-07-07T06:13:01.928395968Z" level=info msg="CreateContainer within sandbox \"6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:13:01.931269 containerd[1923]: time="2025-07-07T06:13:01.931252349Z" level=info msg="Container 443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:01.933831 containerd[1923]: time="2025-07-07T06:13:01.933818930Z" level=info msg="CreateContainer within sandbox \"6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\"" Jul 7 06:13:01.934049 containerd[1923]: time="2025-07-07T06:13:01.934038386Z" level=info msg="StartContainer for \"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\"" Jul 7 06:13:01.934577 containerd[1923]: time="2025-07-07T06:13:01.934566096Z" level=info msg="connecting to shim 443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc" address="unix:///run/containerd/s/dcae22f466a509e3ec0ad9c6e06fd38645bd4b21cc2244b2d90faff285bed23c" protocol=ttrpc version=3 Jul 7 06:13:01.941036 systemd[1]: Started cri-containerd-443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc.scope - libcontainer container 443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc. 
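Editor's note: the sequence just above is the create/start cycle in miniature: containerd dials a task shim over a ttrpc unix socket ("connecting to shim ... protocol=ttrpc version=3"), CreateContainer within the sandbox returns a container id, StartContainer launches it, and systemd tracks the shim's work in a cri-containerd-<id>.scope unit. For orientation only, here is a minimal sketch of driving a similar cycle directly with containerd's Go client; it assumes the classic github.com/containerd/containerd client module (paths differ in containerd 2.x), uses placeholder image and container names, and bypasses the CRI layer that kubelet actually talks to.

// A rough sketch only: create and start a container with the containerd Go
// client. Not the CRI path used by kubelet in the log above.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The entries above run in containerd's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Placeholder image ref; the log pulls its images from ghcr.io/flatcar.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "demo",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask is roughly where a shim is started and dialed over ttrpc,
	// which is what the "connecting to shim" entries above record.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	// Wait is set up before Start so the exit status is not missed.
	exitCh, err := task.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent in spirit to "StartContainer ... returns successfully".
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}

	status := <-exitCh
	code, _, err := status.Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("task exited with status %d", code)
}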
Jul 7 06:13:01.951673 containerd[1923]: time="2025-07-07T06:13:01.951619357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cf9698c94-kmd2s,Uid:99b5b42d-97bc-4221-9141-472378a6b5d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767\"" Jul 7 06:13:01.953499 containerd[1923]: time="2025-07-07T06:13:01.953485041Z" level=info msg="CreateContainer within sandbox \"f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:13:01.956308 containerd[1923]: time="2025-07-07T06:13:01.956294445Z" level=info msg="Container 1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:01.959176 containerd[1923]: time="2025-07-07T06:13:01.959132011Z" level=info msg="CreateContainer within sandbox \"f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff\"" Jul 7 06:13:01.959398 containerd[1923]: time="2025-07-07T06:13:01.959350249Z" level=info msg="StartContainer for \"1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff\"" Jul 7 06:13:01.959881 containerd[1923]: time="2025-07-07T06:13:01.959868853Z" level=info msg="connecting to shim 1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff" address="unix:///run/containerd/s/791801a505ded155720732910af09f418a2b8acd3e2cce07bcbb88cad74fc684" protocol=ttrpc version=3 Jul 7 06:13:01.969004 kubelet[3281]: I0707 06:13:01.968983 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:01.974973 systemd[1]: Started cri-containerd-1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff.scope - libcontainer container 1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff. 
Jul 7 06:13:01.975935 kubelet[3281]: I0707 06:13:01.975873 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fbsmv" podStartSLOduration=33.975844694 podStartE2EDuration="33.975844694s" podCreationTimestamp="2025-07-07 06:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:13:01.975222171 +0000 UTC m=+40.202195964" watchObservedRunningTime="2025-07-07 06:13:01.975844694 +0000 UTC m=+40.202818483" Jul 7 06:13:01.977700 containerd[1923]: time="2025-07-07T06:13:01.977675315Z" level=info msg="StartContainer for \"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" returns successfully" Jul 7 06:13:02.008293 containerd[1923]: time="2025-07-07T06:13:02.008263964Z" level=info msg="StartContainer for \"1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff\" returns successfully" Jul 7 06:13:02.188760 systemd-networkd[1840]: cali1ff116bc951: Gained IPv6LL Jul 7 06:13:02.636880 systemd-networkd[1840]: caliaf7429630c8: Gained IPv6LL Jul 7 06:13:02.980194 kubelet[3281]: I0707 06:13:02.980033 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cf9698c94-kmd2s" podStartSLOduration=27.980006835 podStartE2EDuration="27.980006835s" podCreationTimestamp="2025-07-07 06:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:13:02.979587173 +0000 UTC m=+41.206560964" watchObservedRunningTime="2025-07-07 06:13:02.980006835 +0000 UTC m=+41.206980624" Jul 7 06:13:02.996119 kubelet[3281]: I0707 06:13:02.996080 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c7f86ddf5-cjvwq" podStartSLOduration=22.131583486 podStartE2EDuration="24.996066386s" podCreationTimestamp="2025-07-07 06:12:38 +0000 UTC" firstStartedPulling="2025-07-07 06:12:59.060024892 +0000 UTC m=+37.286998680" lastFinishedPulling="2025-07-07 06:13:01.924507789 +0000 UTC m=+40.151481580" observedRunningTime="2025-07-07 06:13:02.995576757 +0000 UTC m=+41.222550549" watchObservedRunningTime="2025-07-07 06:13:02.996066386 +0000 UTC m=+41.223040175" Jul 7 06:13:03.012308 containerd[1923]: time="2025-07-07T06:13:03.012286551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"91e02ec4a42aa78140b6db777e83d000a112780bf943b66841386ab76f65160c\" pid:6426 exited_at:{seconds:1751868783 nanos:12126982}" Jul 7 06:13:03.276709 systemd-networkd[1840]: calid96321bbae8: Gained IPv6LL Jul 7 06:13:03.891964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375276318.mount: Deactivated successfully. 
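Editor's note: the pod_startup_latency_tracker entries above carry two figures per pod: podStartE2EDuration, roughly the gap between podCreationTimestamp and observedRunningTime, and podStartSLOduration, which excludes the time spent pulling images. For calico-kube-controllers the numbers line up exactly once the monotonic offsets (the m=+... suffixes) are used: 24.996066386s end to end, minus a 2.8644829s pull window, gives the logged 22.131583486s. A tiny sketch of that arithmetic, as a reading of the logged fields rather than kubelet's actual code:

// Reproduces the calico-kube-controllers startup numbers kubelet logs above.
// This is my reading of the logged fields, not kubelet's implementation.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Logged values for calico-kube-controllers-5c7f86ddf5-cjvwq:
	e2e := 24996066386 * time.Nanosecond // podStartE2EDuration="24.996066386s"

	// Monotonic offsets (the "m=+..." suffixes) of the image-pull window:
	firstStartedPulling := 37286998680 * time.Nanosecond // m=+37.286998680
	lastFinishedPulling := 40151481580 * time.Nanosecond // m=+40.151481580

	pull := lastFinishedPulling - firstStartedPulling // 2.8644829s pulling the image
	slo := e2e - pull

	fmt.Println(slo) // 22.131583486s, the logged podStartSLOduration
}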
Jul 7 06:13:04.097287 containerd[1923]: time="2025-07-07T06:13:04.097258443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:04.097519 containerd[1923]: time="2025-07-07T06:13:04.097490089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 06:13:04.097870 containerd[1923]: time="2025-07-07T06:13:04.097857435Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:04.098730 containerd[1923]: time="2025-07-07T06:13:04.098717531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:04.099117 containerd[1923]: time="2025-07-07T06:13:04.099106214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.174534298s" Jul 7 06:13:04.099142 containerd[1923]: time="2025-07-07T06:13:04.099121637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 06:13:04.099581 containerd[1923]: time="2025-07-07T06:13:04.099572513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:13:04.100648 containerd[1923]: time="2025-07-07T06:13:04.100631404Z" level=info msg="CreateContainer within sandbox \"50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:13:04.103335 containerd[1923]: time="2025-07-07T06:13:04.103322043Z" level=info msg="Container 006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:04.105881 containerd[1923]: time="2025-07-07T06:13:04.105846388Z" level=info msg="CreateContainer within sandbox \"50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\"" Jul 7 06:13:04.106114 containerd[1923]: time="2025-07-07T06:13:04.106061527Z" level=info msg="StartContainer for \"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\"" Jul 7 06:13:04.106648 containerd[1923]: time="2025-07-07T06:13:04.106603048Z" level=info msg="connecting to shim 006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47" address="unix:///run/containerd/s/729330c6487df0c3c33b99c2d4b17425fa080eefbb20fecb12d93e3dde4d8f48" protocol=ttrpc version=3 Jul 7 06:13:04.126934 systemd[1]: Started cri-containerd-006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47.scope - libcontainer container 006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47. 
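Editor's note: a few entries up, the goldmane pull finishes with "active requests=0, bytes read=66352308" after 2.174534298s, which works out to roughly 30 MB/s from ghcr.io; a back-of-the-envelope figure only, since layers already present locally would not be re-fetched. The check:

// Back-of-the-envelope pull throughput for the goldmane image, using the
// figures containerd logs above (bytes read and wall time of the pull).
package main

import "fmt"

func main() {
	const bytesRead = 66352308  // "bytes read=66352308"
	const seconds = 2.174534298 // "in 2.174534298s"
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // prints about 30.5 MB/s
}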
Jul 7 06:13:04.153994 containerd[1923]: time="2025-07-07T06:13:04.153904282Z" level=info msg="StartContainer for \"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" returns successfully" Jul 7 06:13:05.563971 containerd[1923]: time="2025-07-07T06:13:05.563918560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:05.564197 containerd[1923]: time="2025-07-07T06:13:05.564138350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 06:13:05.564514 containerd[1923]: time="2025-07-07T06:13:05.564471487Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:05.565562 containerd[1923]: time="2025-07-07T06:13:05.565513629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:05.565915 containerd[1923]: time="2025-07-07T06:13:05.565875671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.466289022s" Jul 7 06:13:05.565915 containerd[1923]: time="2025-07-07T06:13:05.565890000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 06:13:05.567790 containerd[1923]: time="2025-07-07T06:13:05.567779175Z" level=info msg="CreateContainer within sandbox \"7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:13:05.571847 containerd[1923]: time="2025-07-07T06:13:05.571835033Z" level=info msg="Container 6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:05.575736 containerd[1923]: time="2025-07-07T06:13:05.575696238Z" level=info msg="CreateContainer within sandbox \"7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e\"" Jul 7 06:13:05.576013 containerd[1923]: time="2025-07-07T06:13:05.575981262Z" level=info msg="StartContainer for \"6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e\"" Jul 7 06:13:05.576896 containerd[1923]: time="2025-07-07T06:13:05.576884840Z" level=info msg="connecting to shim 6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e" address="unix:///run/containerd/s/936c4e0a89cf5c386589a962e463c236bc68d43a42530fc42934fdda1b1e7f5f" protocol=ttrpc version=3 Jul 7 06:13:05.598825 systemd[1]: Started cri-containerd-6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e.scope - libcontainer container 6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e. 
Jul 7 06:13:05.618596 containerd[1923]: time="2025-07-07T06:13:05.618550558Z" level=info msg="StartContainer for \"6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e\" returns successfully" Jul 7 06:13:05.619124 containerd[1923]: time="2025-07-07T06:13:05.619111854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:13:06.091130 containerd[1923]: time="2025-07-07T06:13:06.091104159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"905e78d1e3f8c3d81fa48c2675df786060b85f01bf10d35fa022ff3b6d9f0182\" pid:6700 exit_status:1 exited_at:{seconds:1751868786 nanos:90850182}" Jul 7 06:13:07.087103 containerd[1923]: time="2025-07-07T06:13:07.087074522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"cb8f7282da6253b1be44eb21e0000032174b32ea169b1dcf82bef12df28a8b97\" pid:6777 exit_status:1 exited_at:{seconds:1751868787 nanos:86857865}" Jul 7 06:13:07.100341 kubelet[3281]: I0707 06:13:07.100290 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:07.112691 kubelet[3281]: I0707 06:13:07.112591 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-fpfpz" podStartSLOduration=25.954241504 podStartE2EDuration="30.112563316s" podCreationTimestamp="2025-07-07 06:12:37 +0000 UTC" firstStartedPulling="2025-07-07 06:12:59.941198898 +0000 UTC m=+38.168172686" lastFinishedPulling="2025-07-07 06:13:04.099520707 +0000 UTC m=+42.326494498" observedRunningTime="2025-07-07 06:13:05.001960389 +0000 UTC m=+43.228934241" watchObservedRunningTime="2025-07-07 06:13:07.112563316 +0000 UTC m=+45.339537104" Jul 7 06:13:07.212712 containerd[1923]: time="2025-07-07T06:13:07.212683835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:07.212906 containerd[1923]: time="2025-07-07T06:13:07.212892938Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 06:13:07.213224 containerd[1923]: time="2025-07-07T06:13:07.213213614Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:07.214170 containerd[1923]: time="2025-07-07T06:13:07.214159042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:13:07.214398 containerd[1923]: time="2025-07-07T06:13:07.214385619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.595257889s" Jul 7 06:13:07.214424 containerd[1923]: time="2025-07-07T06:13:07.214403799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference 
\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:13:07.215959 containerd[1923]: time="2025-07-07T06:13:07.215947987Z" level=info msg="CreateContainer within sandbox \"7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:13:07.219207 containerd[1923]: time="2025-07-07T06:13:07.219195708Z" level=info msg="Container a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:13:07.222545 containerd[1923]: time="2025-07-07T06:13:07.222532830Z" level=info msg="CreateContainer within sandbox \"7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41\"" Jul 7 06:13:07.222771 containerd[1923]: time="2025-07-07T06:13:07.222760325Z" level=info msg="StartContainer for \"a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41\"" Jul 7 06:13:07.223541 containerd[1923]: time="2025-07-07T06:13:07.223530885Z" level=info msg="connecting to shim a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41" address="unix:///run/containerd/s/936c4e0a89cf5c386589a962e463c236bc68d43a42530fc42934fdda1b1e7f5f" protocol=ttrpc version=3 Jul 7 06:13:07.245075 systemd[1]: Started cri-containerd-a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41.scope - libcontainer container a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41. Jul 7 06:13:07.300811 containerd[1923]: time="2025-07-07T06:13:07.300783769Z" level=info msg="StartContainer for \"a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41\" returns successfully" Jul 7 06:13:07.673849 kubelet[3281]: I0707 06:13:07.673761 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:07.785359 containerd[1923]: time="2025-07-07T06:13:07.785330294Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"ad831ac702f6a271429734bbbfd3f2eeaad5c88ff1fa25ddfba61dd0978d7b68\" pid:6915 exit_status:1 exited_at:{seconds:1751868787 nanos:785083423}" Jul 7 06:13:07.829604 containerd[1923]: time="2025-07-07T06:13:07.829576714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"a21b1c158fd174634ed328078f81f38f52d87203cd7d13eba1b2e8a0686706dc\" pid:6952 exit_status:1 exited_at:{seconds:1751868787 nanos:829401574}" Jul 7 06:13:07.861303 kubelet[3281]: I0707 06:13:07.861211 3281 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:13:07.861303 kubelet[3281]: I0707 06:13:07.861295 3281 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:13:08.024714 kubelet[3281]: I0707 06:13:08.024580 3281 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rgmw9" podStartSLOduration=23.839625804 podStartE2EDuration="30.024541676s" podCreationTimestamp="2025-07-07 06:12:38 +0000 UTC" firstStartedPulling="2025-07-07 06:13:01.029850718 +0000 UTC m=+39.256824508" lastFinishedPulling="2025-07-07 06:13:07.214766592 +0000 UTC 
m=+45.441740380" observedRunningTime="2025-07-07 06:13:08.023016639 +0000 UTC m=+46.249990500" watchObservedRunningTime="2025-07-07 06:13:08.024541676 +0000 UTC m=+46.251515517" Jul 7 06:13:08.371472 systemd-networkd[1840]: vxlan.calico: Link UP Jul 7 06:13:08.371481 systemd-networkd[1840]: vxlan.calico: Gained carrier Jul 7 06:13:10.189000 systemd-networkd[1840]: vxlan.calico: Gained IPv6LL Jul 7 06:13:33.055394 containerd[1923]: time="2025-07-07T06:13:33.055331517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"73e090e84934f0c4a7fccf0fdefb7b9c93a9e66ebc7a82872d65ee19b0b178f5\" pid:7162 exited_at:{seconds:1751868813 nanos:55164387}" Jul 7 06:13:37.096582 containerd[1923]: time="2025-07-07T06:13:37.096521544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"fff08315da0b3874193fa0fbcfc0af4422c7fe7c113b79c2f95842305eff37af\" pid:7184 exited_at:{seconds:1751868817 nanos:96328220}" Jul 7 06:13:37.847088 containerd[1923]: time="2025-07-07T06:13:37.847049405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"0e0748d20c298ce8b4b0dc98f4ab67435fbd71b6f1e5a5c001d33d42ac12a818\" pid:7219 exited_at:{seconds:1751868817 nanos:846841421}" Jul 7 06:13:39.859008 kubelet[3281]: I0707 06:13:39.858974 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:13:42.274922 containerd[1923]: time="2025-07-07T06:13:42.274896085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"8c46eec08068de8fd161273b10204a2a14b7f17e9c6d0d031f7580c2ffdd3293\" pid:7255 exited_at:{seconds:1751868822 nanos:274699148}" Jul 7 06:13:44.718829 containerd[1923]: time="2025-07-07T06:13:44.718775640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"b6b2788357e11365695222e3d127f75de50a7b66cc5381dee9d4e847e9ccdcbe\" pid:7287 exited_at:{seconds:1751868824 nanos:718587114}" Jul 7 06:13:59.818966 systemd[1]: Started sshd@9-145.40.90.175:22-117.198.94.162:55666.service - OpenSSH per-connection server daemon (117.198.94.162:55666). Jul 7 06:14:03.074938 containerd[1923]: time="2025-07-07T06:14:03.074905476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"d38aa321fddf9896d43777fcafbb37393dae630543943e28ae2f9c191e4340c4\" pid:7323 exited_at:{seconds:1751868843 nanos:74743179}" Jul 7 06:14:04.040779 sshd-session[7334]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=117.198.94.162 user=operator Jul 7 06:14:06.127139 sshd[7308]: PAM: Permission denied for operator from 117.198.94.162 Jul 7 06:14:06.701852 sshd[7308]: Connection closed by authenticating user operator 117.198.94.162 port 55666 [preauth] Jul 7 06:14:06.707237 systemd[1]: sshd@9-145.40.90.175:22-117.198.94.162:55666.service: Deactivated successfully. 
Jul 7 06:14:07.043236 containerd[1923]: time="2025-07-07T06:14:07.043202689Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"9b652eb2c1291205c2482023033d44ea1f52b26159dfbc0b6dc6680567c6f420\" pid:7350 exited_at:{seconds:1751868847 nanos:42978807}" Jul 7 06:14:07.892960 containerd[1923]: time="2025-07-07T06:14:07.892890950Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"c1be9d471ad7d71e2b5072f9cc5ff91f9950727ea2c2933c7aaf32c31d651ef2\" pid:7380 exited_at:{seconds:1751868847 nanos:892618723}" Jul 7 06:14:33.018271 containerd[1923]: time="2025-07-07T06:14:33.018243838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"c8cb99b0cbb85893f6d15ae049e5c269449fa5746d8eb29ea34e41aa3032ee74\" pid:7428 exited_at:{seconds:1751868873 nanos:18123533}" Jul 7 06:14:37.100216 containerd[1923]: time="2025-07-07T06:14:37.100187656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"04491502cf94eb529e3885bb870824b49933ada2d7c9398ea609f410fc7bb99f\" pid:7450 exited_at:{seconds:1751868877 nanos:99987963}" Jul 7 06:14:37.857204 containerd[1923]: time="2025-07-07T06:14:37.857177143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"274bdc6cf56cbd6c072677cd5d350d7b26a16c424f56a04005f0b800148dc0df\" pid:7484 exited_at:{seconds:1751868877 nanos:856879248}" Jul 7 06:14:42.306694 containerd[1923]: time="2025-07-07T06:14:42.306659506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"4f468e56c9e85f546ef84db807aebdfc556feca2fb5f4ea526730a9ffe67d970\" pid:7525 exited_at:{seconds:1751868882 nanos:306463882}" Jul 7 06:14:44.716370 containerd[1923]: time="2025-07-07T06:14:44.716344724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"dd4d94e4334629fedd86bb047dccd1f4747e7b28fc805a05d6ecd6d3a362593e\" pid:7557 exited_at:{seconds:1751868884 nanos:716223168}" Jul 7 06:14:54.587447 systemd[1]: Started sshd@10-145.40.90.175:22-221.206.42.82:51848.service - OpenSSH per-connection server daemon (221.206.42.82:51848). 
Jul 7 06:14:57.879494 sshd[7586]: Invalid user admin from 221.206.42.82 port 51848 Jul 7 06:14:58.669329 sshd-session[7588]: pam_faillock(sshd:auth): User unknown Jul 7 06:14:58.672807 sshd[7586]: Postponed keyboard-interactive for invalid user admin from 221.206.42.82 port 51848 ssh2 [preauth] Jul 7 06:14:59.406205 sshd-session[7588]: pam_unix(sshd:auth): check pass; user unknown Jul 7 06:14:59.406269 sshd-session[7588]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.206.42.82 Jul 7 06:14:59.407478 sshd-session[7588]: pam_faillock(sshd:auth): User unknown Jul 7 06:15:01.379051 sshd[7586]: PAM: Permission denied for illegal user admin from 221.206.42.82 Jul 7 06:15:01.380109 sshd[7586]: Failed keyboard-interactive/pam for invalid user admin from 221.206.42.82 port 51848 ssh2 Jul 7 06:15:02.021671 sshd[7586]: Connection closed by invalid user admin 221.206.42.82 port 51848 [preauth] Jul 7 06:15:02.026733 systemd[1]: sshd@10-145.40.90.175:22-221.206.42.82:51848.service: Deactivated successfully. Jul 7 06:15:03.035915 containerd[1923]: time="2025-07-07T06:15:03.035865443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"2f26a1ae67a06dae15608d3d4b1697a950a24e779b6e971719db57752d4669ee\" pid:7604 exited_at:{seconds:1751868903 nanos:35701663}" Jul 7 06:15:07.069731 containerd[1923]: time="2025-07-07T06:15:07.069699881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"3f1ef1f44c99076f926dd6c7d4610904973b36e931c245e31bf0f058ec4f4dde\" pid:7625 exited_at:{seconds:1751868907 nanos:69476263}" Jul 7 06:15:07.848109 containerd[1923]: time="2025-07-07T06:15:07.848050091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"ded1390105f6d7178b1847c2848b348b717ada5a2c972562a98c15fd27a4524d\" pid:7662 exited_at:{seconds:1751868907 nanos:847847175}" Jul 7 06:15:23.098437 systemd[1]: Started sshd@11-145.40.90.175:22-49.124.151.18:41304.service - OpenSSH per-connection server daemon (49.124.151.18:41304). Jul 7 06:15:23.270504 sshd[7689]: Connection closed by 49.124.151.18 port 41304 Jul 7 06:15:23.273412 systemd[1]: sshd@11-145.40.90.175:22-49.124.151.18:41304.service: Deactivated successfully. 
Jul 7 06:15:33.023470 containerd[1923]: time="2025-07-07T06:15:33.023440386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"e240aae7674e25ecd34b67c5ccf2531087dead6efce1e534805e3ec1f1703047\" pid:7706 exited_at:{seconds:1751868933 nanos:23294789}" Jul 7 06:15:37.051699 containerd[1923]: time="2025-07-07T06:15:37.051635131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"9179dc70131664f5bc5743ba0cdd7e854c8254384a3ec4cb551eaa9f6d78c507\" pid:7727 exited_at:{seconds:1751868937 nanos:51412715}" Jul 7 06:15:37.838387 containerd[1923]: time="2025-07-07T06:15:37.838356700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"61173e93996db41bd89e8110897ec12d517c4457523943b6d75d5ab3560a1358\" pid:7761 exited_at:{seconds:1751868937 nanos:838159197}" Jul 7 06:15:42.285653 containerd[1923]: time="2025-07-07T06:15:42.285590849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"f28854fd8e20c85e47ec8f16ee6f3060a453bd2833930d494ddd8f79214d11e5\" pid:7797 exited_at:{seconds:1751868942 nanos:285332913}" Jul 7 06:15:44.761937 containerd[1923]: time="2025-07-07T06:15:44.761909007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"e347cae35fd5f0f72789eb976faa0ba528a745e9ea0dd4ea659261d3566fb38f\" pid:7830 exited_at:{seconds:1751868944 nanos:761753864}" Jul 7 06:16:03.020850 containerd[1923]: time="2025-07-07T06:16:03.020826158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"244442a596600923487d633354a9a479038e9803458f48fbee23651e7f42f784\" pid:7862 exited_at:{seconds:1751868963 nanos:20650289}" Jul 7 06:16:07.096264 containerd[1923]: time="2025-07-07T06:16:07.096204247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"d1a26cebbfc2da2d6b1f1d820e86719dccfd07da032e1c347cdd925e89b9d6a4\" pid:7885 exited_at:{seconds:1751868967 nanos:95996346}" Jul 7 06:16:07.851251 containerd[1923]: time="2025-07-07T06:16:07.851191343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"411677b9eef6ca963178ba5698e45425fda8e087028495ffcdb9fc8f2efe120b\" pid:7916 exited_at:{seconds:1751868967 nanos:850979670}" Jul 7 06:16:33.078911 containerd[1923]: time="2025-07-07T06:16:33.078851774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"760567213b0f8ab7ef59c53e1527de72124d81905d6bd0ace848dc5e67f13e5b\" pid:7980 exited_at:{seconds:1751868993 nanos:78681336}" Jul 7 06:16:37.052193 containerd[1923]: time="2025-07-07T06:16:37.052165723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"fe0c38071241093224b2635ecab939e2fa872a5c3b68d4837087716478839ee2\" pid:8002 exited_at:{seconds:1751868997 nanos:51902570}" Jul 7 06:16:37.888765 containerd[1923]: time="2025-07-07T06:16:37.888730985Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"e4b776591eb986e9a5f0a3bd53b646c6cb44ec013fabffef0d11f4328c8cd0ed\" pid:8035 exited_at:{seconds:1751868997 nanos:888540369}" Jul 7 06:16:42.308036 containerd[1923]: time="2025-07-07T06:16:42.308007446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"ce5ece378c20574481e5fef341dd881564c39aee5a7e41648d35d2fc085c71eb\" pid:8072 exited_at:{seconds:1751869002 nanos:307826237}" Jul 7 06:16:44.718719 containerd[1923]: time="2025-07-07T06:16:44.718691664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"b61884a98e766880840403a2a009ea8ed5ff7aa9cb95fe86590eb731d4699bdf\" pid:8105 exited_at:{seconds:1751869004 nanos:718584812}" Jul 7 06:17:03.043245 containerd[1923]: time="2025-07-07T06:17:03.043213502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"7f3bdaf189bbf84263bfc41263f78dae55ef0dc9898380c716ca7d22fc6ccf1b\" pid:8131 exited_at:{seconds:1751869023 nanos:43104272}" Jul 7 06:17:07.117280 containerd[1923]: time="2025-07-07T06:17:07.117251621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"6fc4642a2b1635e2d32cf919bea3001cdb2ea9a429ef3ae67d5298707ec222e0\" pid:8156 exited_at:{seconds:1751869027 nanos:117028636}" Jul 7 06:17:07.877380 containerd[1923]: time="2025-07-07T06:17:07.877335150Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"cc4cb5c1dee7d497c29a3c87dada3dc980faa93a3ea809658a45086e00949905\" pid:8189 exited_at:{seconds:1751869027 nanos:877091515}" Jul 7 06:17:18.338325 containerd[1923]: time="2025-07-07T06:17:18.338099188Z" level=warning msg="container event discarded" container=7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1 type=CONTAINER_CREATED_EVENT Jul 7 06:17:18.338325 containerd[1923]: time="2025-07-07T06:17:18.338265923Z" level=warning msg="container event discarded" container=7aa20c8fc6e5555823f91243e171480cd9cd78137a27017e1697b2cc9b4656d1 type=CONTAINER_STARTED_EVENT Jul 7 06:17:18.355836 containerd[1923]: time="2025-07-07T06:17:18.355723537Z" level=warning msg="container event discarded" container=407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b type=CONTAINER_CREATED_EVENT Jul 7 06:17:18.377283 containerd[1923]: time="2025-07-07T06:17:18.377093724Z" level=warning msg="container event discarded" container=31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618 type=CONTAINER_CREATED_EVENT Jul 7 06:17:18.377283 containerd[1923]: time="2025-07-07T06:17:18.377213060Z" level=warning msg="container event discarded" container=31a97b0dcf862c603cfc10b9eada4c5285ffafffe543a8d3a6281046bd0c2618 type=CONTAINER_STARTED_EVENT Jul 7 06:17:18.377283 containerd[1923]: time="2025-07-07T06:17:18.377242666Z" level=warning msg="container event discarded" container=88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822 type=CONTAINER_CREATED_EVENT Jul 7 06:17:18.377283 containerd[1923]: time="2025-07-07T06:17:18.377267160Z" level=warning msg="container event discarded" container=88d2b614afa91472b18efc37a062875853108680e41537813486e70c673ba822 type=CONTAINER_STARTED_EVENT Jul 7 06:17:18.377283 
containerd[1923]: time="2025-07-07T06:17:18.377289116Z" level=warning msg="container event discarded" container=7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943 type=CONTAINER_CREATED_EVENT Jul 7 06:17:18.377283 containerd[1923]: time="2025-07-07T06:17:18.377308915Z" level=warning msg="container event discarded" container=7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b type=CONTAINER_CREATED_EVENT Jul 7 06:17:18.404902 containerd[1923]: time="2025-07-07T06:17:18.404762213Z" level=warning msg="container event discarded" container=407eb190fad2ed4850db795dca0fe7c313338c5eab85a837cf4064c5790fb40b type=CONTAINER_STARTED_EVENT Jul 7 06:17:18.432339 containerd[1923]: time="2025-07-07T06:17:18.432187843Z" level=warning msg="container event discarded" container=7e987ab7d5c1b9c85935d7f17282b155b67b763281a56cd2bd6de6e300cb2943 type=CONTAINER_STARTED_EVENT Jul 7 06:17:18.432339 containerd[1923]: time="2025-07-07T06:17:18.432282404Z" level=warning msg="container event discarded" container=7135bef96cda06cd4406bfc9921fd1ca0a2cbfd43fbb5f0632846c33fb38658b type=CONTAINER_STARTED_EVENT Jul 7 06:17:25.354626 systemd[1]: Started sshd@12-145.40.90.175:22-189.218.168.30:53620.service - OpenSSH per-connection server daemon (189.218.168.30:53620). Jul 7 06:17:27.081832 sshd[8216]: Invalid user supervisor from 189.218.168.30 port 53620 Jul 7 06:17:27.538472 sshd-session[8218]: pam_faillock(sshd:auth): User unknown Jul 7 06:17:27.541856 sshd[8216]: Postponed keyboard-interactive for invalid user supervisor from 189.218.168.30 port 53620 ssh2 [preauth] Jul 7 06:17:27.894557 sshd-session[8218]: pam_unix(sshd:auth): check pass; user unknown Jul 7 06:17:27.894614 sshd-session[8218]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=189.218.168.30 Jul 7 06:17:27.895790 sshd-session[8218]: pam_faillock(sshd:auth): User unknown Jul 7 06:17:28.947735 containerd[1923]: time="2025-07-07T06:17:28.947485954Z" level=warning msg="container event discarded" container=e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1 type=CONTAINER_CREATED_EVENT Jul 7 06:17:28.947735 containerd[1923]: time="2025-07-07T06:17:28.947698437Z" level=warning msg="container event discarded" container=e49b732e827cd853090a2dac31a0732d7b4b87eeebe155609218830afa9312a1 type=CONTAINER_STARTED_EVENT Jul 7 06:17:28.947735 containerd[1923]: time="2025-07-07T06:17:28.947734075Z" level=warning msg="container event discarded" container=e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8 type=CONTAINER_CREATED_EVENT Jul 7 06:17:28.977106 containerd[1923]: time="2025-07-07T06:17:28.976967133Z" level=warning msg="container event discarded" container=49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254 type=CONTAINER_CREATED_EVENT Jul 7 06:17:28.977106 containerd[1923]: time="2025-07-07T06:17:28.977052386Z" level=warning msg="container event discarded" container=49b971690e245cff42aeaa48d160e3e595e1ecbe4c7fc95cc478c20ae9579254 type=CONTAINER_STARTED_EVENT Jul 7 06:17:28.995450 containerd[1923]: time="2025-07-07T06:17:28.995284853Z" level=warning msg="container event discarded" container=e8a6c1565d8b10e52a90b574606ee7d98914b5211e1797c6d8229638614c3bf8 type=CONTAINER_STARTED_EVENT Jul 7 06:17:29.987064 sshd[8216]: PAM: Permission denied for illegal user supervisor from 189.218.168.30 Jul 7 06:17:29.987935 sshd[8216]: Failed keyboard-interactive/pam for invalid user supervisor from 189.218.168.30 port 53620 ssh2 Jul 7 06:17:30.357090 sshd[8216]: Connection closed 
by invalid user supervisor 189.218.168.30 port 53620 [preauth] Jul 7 06:17:30.362028 systemd[1]: sshd@12-145.40.90.175:22-189.218.168.30:53620.service: Deactivated successfully. Jul 7 06:17:30.573308 containerd[1923]: time="2025-07-07T06:17:30.573168399Z" level=warning msg="container event discarded" container=5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba type=CONTAINER_CREATED_EVENT Jul 7 06:17:30.606483 containerd[1923]: time="2025-07-07T06:17:30.606465905Z" level=warning msg="container event discarded" container=5319372463d6abe1d42a4b0ba2013dfa0a5ef213bcf76313301ebb26e55b41ba type=CONTAINER_STARTED_EVENT Jul 7 06:17:33.070992 containerd[1923]: time="2025-07-07T06:17:33.070962039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"326b5009b7320fcea68142180d610d06ba509dabb3d4afc6b82203788f44af57\" pid:8235 exited_at:{seconds:1751869053 nanos:70821224}" Jul 7 06:17:37.059529 containerd[1923]: time="2025-07-07T06:17:37.059501321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"ea128c2e70c18747b7b09d23015228063c331cf4f7af113a01dd4c4d5a127eaa\" pid:8256 exited_at:{seconds:1751869057 nanos:59289629}" Jul 7 06:17:37.846495 containerd[1923]: time="2025-07-07T06:17:37.846467318Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"362641011e3d9cff48636e0d18b7495aa8ebcd8ef2984031966b5d76f3b3840e\" pid:8288 exited_at:{seconds:1751869057 nanos:846261544}" Jul 7 06:17:37.957812 containerd[1923]: time="2025-07-07T06:17:37.957553517Z" level=warning msg="container event discarded" container=bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77 type=CONTAINER_CREATED_EVENT Jul 7 06:17:37.957812 containerd[1923]: time="2025-07-07T06:17:37.957745739Z" level=warning msg="container event discarded" container=bc21650a42388431ae0593d0b9b2819a46b2514443a1380cb90d983da69ced77 type=CONTAINER_STARTED_EVENT Jul 7 06:17:38.266628 containerd[1923]: time="2025-07-07T06:17:38.266370383Z" level=warning msg="container event discarded" container=8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a type=CONTAINER_CREATED_EVENT Jul 7 06:17:38.266628 containerd[1923]: time="2025-07-07T06:17:38.266500536Z" level=warning msg="container event discarded" container=8522e6653572dfb4222a4a6a74dae9b922f4685b713fb033c4bf5a752c32386a type=CONTAINER_STARTED_EVENT Jul 7 06:17:40.044516 containerd[1923]: time="2025-07-07T06:17:40.044365543Z" level=warning msg="container event discarded" container=1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05 type=CONTAINER_CREATED_EVENT Jul 7 06:17:40.153028 containerd[1923]: time="2025-07-07T06:17:40.152861463Z" level=warning msg="container event discarded" container=1c312b167ae2fb9951f06ac75b1dc9cab01c51428a31c9c3edd30468496b1a05 type=CONTAINER_STARTED_EVENT Jul 7 06:17:41.474915 containerd[1923]: time="2025-07-07T06:17:41.474725105Z" level=warning msg="container event discarded" container=3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241 type=CONTAINER_CREATED_EVENT Jul 7 06:17:41.522302 containerd[1923]: time="2025-07-07T06:17:41.522164033Z" level=warning msg="container event discarded" container=3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241 type=CONTAINER_STARTED_EVENT Jul 7 06:17:42.318748 containerd[1923]: 
time="2025-07-07T06:17:42.318691595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"82c9d4b8720b39f608686692d50451a586f5244e0a12d045c06181552248f907\" pid:8322 exited_at:{seconds:1751869062 nanos:318502261}" Jul 7 06:17:42.494886 containerd[1923]: time="2025-07-07T06:17:42.494731774Z" level=warning msg="container event discarded" container=3a68434bcb3ae98a5ef1793149116c39179469141a5410475b2ab8e8a6f99241 type=CONTAINER_STOPPED_EVENT Jul 7 06:17:44.729863 containerd[1923]: time="2025-07-07T06:17:44.729840127Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"4297252dd0dd265d4d65f13e697298f46ca69ff5400049b5aaf515ed765f1490\" pid:8354 exited_at:{seconds:1751869064 nanos:729728005}" Jul 7 06:17:45.264052 containerd[1923]: time="2025-07-07T06:17:45.263870732Z" level=warning msg="container event discarded" container=a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b type=CONTAINER_CREATED_EVENT Jul 7 06:17:45.307495 containerd[1923]: time="2025-07-07T06:17:45.307352399Z" level=warning msg="container event discarded" container=a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b type=CONTAINER_STARTED_EVENT Jul 7 06:17:46.206005 containerd[1923]: time="2025-07-07T06:17:46.205895309Z" level=warning msg="container event discarded" container=a2bcfef3aafd7b854b3643f5ab2a1a4d314f71061d69683685e16bf8b6d5663b type=CONTAINER_STOPPED_EVENT Jul 7 06:17:50.357766 containerd[1923]: time="2025-07-07T06:17:50.357491268Z" level=warning msg="container event discarded" container=caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f type=CONTAINER_CREATED_EVENT Jul 7 06:17:50.394024 containerd[1923]: time="2025-07-07T06:17:50.393883903Z" level=warning msg="container event discarded" container=caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f type=CONTAINER_STARTED_EVENT Jul 7 06:17:51.463521 containerd[1923]: time="2025-07-07T06:17:51.463362190Z" level=warning msg="container event discarded" container=5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c type=CONTAINER_CREATED_EVENT Jul 7 06:17:51.463521 containerd[1923]: time="2025-07-07T06:17:51.463480730Z" level=warning msg="container event discarded" container=5d7130b21a8bfaa18652baebaeb89f972b3394faf5eb885f09c64c1d4c3ac86c type=CONTAINER_STARTED_EVENT Jul 7 06:17:52.931866 containerd[1923]: time="2025-07-07T06:17:52.931685861Z" level=warning msg="container event discarded" container=c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af type=CONTAINER_CREATED_EVENT Jul 7 06:17:52.992976 containerd[1923]: time="2025-07-07T06:17:52.992897197Z" level=warning msg="container event discarded" container=c7655120ab8e8f32b8869acc3da335c406b987dc8dd82ac9b0729653bfec20af type=CONTAINER_STARTED_EVENT Jul 7 06:17:54.850934 containerd[1923]: time="2025-07-07T06:17:54.850789239Z" level=warning msg="container event discarded" container=791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d type=CONTAINER_CREATED_EVENT Jul 7 06:17:54.975825 containerd[1923]: time="2025-07-07T06:17:54.975680775Z" level=warning msg="container event discarded" container=791a5abeb81a894e1206d8e7387b5e73c667c4d581302c402ea1f1aa6bb4fb9d type=CONTAINER_STARTED_EVENT Jul 7 06:17:57.981792 containerd[1923]: time="2025-07-07T06:17:57.981626304Z" level=warning msg="container event discarded" 
container=2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1 type=CONTAINER_CREATED_EVENT Jul 7 06:17:57.981792 containerd[1923]: time="2025-07-07T06:17:57.981741626Z" level=warning msg="container event discarded" container=2c1f609b05f2cc146144cf7ee2d1deb179036e57a2bbcbb46acca0582c7895b1 type=CONTAINER_STARTED_EVENT Jul 7 06:17:59.007600 containerd[1923]: time="2025-07-07T06:17:59.007442957Z" level=warning msg="container event discarded" container=c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33 type=CONTAINER_CREATED_EVENT Jul 7 06:17:59.007600 containerd[1923]: time="2025-07-07T06:17:59.007537581Z" level=warning msg="container event discarded" container=c32d3d38a1a69ee6343614ca14233a0a8d3d3aed4cb5d3b3326ba702134c6f33 type=CONTAINER_STARTED_EVENT Jul 7 06:17:59.007600 containerd[1923]: time="2025-07-07T06:17:59.007564864Z" level=warning msg="container event discarded" container=72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0 type=CONTAINER_CREATED_EVENT Jul 7 06:17:59.053995 containerd[1923]: time="2025-07-07T06:17:59.053878177Z" level=warning msg="container event discarded" container=72e7d0633e7cdae365f08679c20012fd1656db4b0ce9b949e5ee867c36757eb0 type=CONTAINER_STARTED_EVENT Jul 7 06:17:59.070383 containerd[1923]: time="2025-07-07T06:17:59.070259569Z" level=warning msg="container event discarded" container=6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f type=CONTAINER_CREATED_EVENT Jul 7 06:17:59.070383 containerd[1923]: time="2025-07-07T06:17:59.070326107Z" level=warning msg="container event discarded" container=6861de34257354e7fffb304a4b688a5c6b6e997bb0fa284cb5eee68654a1211f type=CONTAINER_STARTED_EVENT Jul 7 06:17:59.951320 containerd[1923]: time="2025-07-07T06:17:59.951159686Z" level=warning msg="container event discarded" container=50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5 type=CONTAINER_CREATED_EVENT Jul 7 06:17:59.951320 containerd[1923]: time="2025-07-07T06:17:59.951259550Z" level=warning msg="container event discarded" container=50f939e2dc902048b1e53916c7830b070e6b9b589bd408fe157a191d70845ee5 type=CONTAINER_STARTED_EVENT Jul 7 06:17:59.951320 containerd[1923]: time="2025-07-07T06:17:59.951288222Z" level=warning msg="container event discarded" container=a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0 type=CONTAINER_CREATED_EVENT Jul 7 06:18:00.002864 containerd[1923]: time="2025-07-07T06:18:00.002719571Z" level=warning msg="container event discarded" container=a66c1565f31be9733095dddc0027d0ba5e3fb1c6eae6b51fe8b598d8e8abafc0 type=CONTAINER_STARTED_EVENT Jul 7 06:18:00.967591 containerd[1923]: time="2025-07-07T06:18:00.967505936Z" level=warning msg="container event discarded" container=bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc type=CONTAINER_CREATED_EVENT Jul 7 06:18:00.967591 containerd[1923]: time="2025-07-07T06:18:00.967560716Z" level=warning msg="container event discarded" container=bc64ac6bb280e765765ad613d7b6821dc580cea06ff5c320147a7d8159bbdcdc type=CONTAINER_STARTED_EVENT Jul 7 06:18:00.967591 containerd[1923]: time="2025-07-07T06:18:00.967572482Z" level=warning msg="container event discarded" container=270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372 type=CONTAINER_CREATED_EVENT Jul 7 06:18:01.005056 containerd[1923]: time="2025-07-07T06:18:01.004895786Z" level=warning msg="container event discarded" container=270340ecf5eeb81fe55c84598a82795cb4eea03350edc7e30f3ba236ad203372 type=CONTAINER_STARTED_EVENT Jul 7 06:18:01.040523 
containerd[1923]: time="2025-07-07T06:18:01.040359055Z" level=warning msg="container event discarded" container=7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c type=CONTAINER_CREATED_EVENT Jul 7 06:18:01.040523 containerd[1923]: time="2025-07-07T06:18:01.040463043Z" level=warning msg="container event discarded" container=7a38225a2ff78ec86e7fab358e08af3de97fb949104a1c5d0f6b8c136ca4f64c type=CONTAINER_STARTED_EVENT Jul 7 06:18:01.944005 containerd[1923]: time="2025-07-07T06:18:01.943871743Z" level=warning msg="container event discarded" container=443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc type=CONTAINER_CREATED_EVENT Jul 7 06:18:01.962398 containerd[1923]: time="2025-07-07T06:18:01.962257876Z" level=warning msg="container event discarded" container=f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767 type=CONTAINER_CREATED_EVENT Jul 7 06:18:01.962398 containerd[1923]: time="2025-07-07T06:18:01.962337864Z" level=warning msg="container event discarded" container=f14ed8c7dcc13920574d56413daf28a5a4a6b5b59941482b862ab9e2b4ecf767 type=CONTAINER_STARTED_EVENT Jul 7 06:18:01.962398 containerd[1923]: time="2025-07-07T06:18:01.962365740Z" level=warning msg="container event discarded" container=1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff type=CONTAINER_CREATED_EVENT Jul 7 06:18:01.987852 containerd[1923]: time="2025-07-07T06:18:01.987730343Z" level=warning msg="container event discarded" container=443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc type=CONTAINER_STARTED_EVENT Jul 7 06:18:02.018232 containerd[1923]: time="2025-07-07T06:18:02.018094770Z" level=warning msg="container event discarded" container=1f4322b9336cb5bcddbbeef494e3941ef5d1a01e2b40a8a0cc830c4d418a70ff type=CONTAINER_STARTED_EVENT Jul 7 06:18:03.033446 containerd[1923]: time="2025-07-07T06:18:03.033407209Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"d3dc23a03fc92bdb42a8d5d468451f5a1374a0a26d42b3a92f2fad6550c84899\" pid:8403 exited_at:{seconds:1751869083 nanos:33195350}" Jul 7 06:18:04.116065 containerd[1923]: time="2025-07-07T06:18:04.115904305Z" level=warning msg="container event discarded" container=006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47 type=CONTAINER_CREATED_EVENT Jul 7 06:18:04.164505 containerd[1923]: time="2025-07-07T06:18:04.164357268Z" level=warning msg="container event discarded" container=006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47 type=CONTAINER_STARTED_EVENT Jul 7 06:18:05.586401 containerd[1923]: time="2025-07-07T06:18:05.586234141Z" level=warning msg="container event discarded" container=6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e type=CONTAINER_CREATED_EVENT Jul 7 06:18:05.627892 containerd[1923]: time="2025-07-07T06:18:05.627741997Z" level=warning msg="container event discarded" container=6aaf76fbb52d8ec83865a536a625d4867075425fb4ee110e78a36fa67790da8e type=CONTAINER_STARTED_EVENT Jul 7 06:18:07.108515 containerd[1923]: time="2025-07-07T06:18:07.108488148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"aa70d48678721c7ab1a2f6629dcab5227afd6b4599b503ac590f1d90bd7d3727\" pid:8426 exited_at:{seconds:1751869087 nanos:108291955}" Jul 7 06:18:07.232636 containerd[1923]: time="2025-07-07T06:18:07.232472939Z" level=warning msg="container event discarded" 
container=a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41 type=CONTAINER_CREATED_EVENT Jul 7 06:18:07.310214 containerd[1923]: time="2025-07-07T06:18:07.310077262Z" level=warning msg="container event discarded" container=a211e7fc15915bfe8c90a0bd6cfb2ecd41cfbebb54da3b1b03cc86991088ed41 type=CONTAINER_STARTED_EVENT Jul 7 06:18:07.854281 containerd[1923]: time="2025-07-07T06:18:07.854251141Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"c99c0350a48f11ca11f898242f4482accf5d0185f6dfbce7f2bde9bf323ad720\" pid:8461 exited_at:{seconds:1751869087 nanos:853844040}" Jul 7 06:18:33.072935 containerd[1923]: time="2025-07-07T06:18:33.072906567Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"26956933b4a546c8fe8e1f9e762f54f88edd8fe347adb1bea429e751e6ca0a2c\" pid:8508 exited_at:{seconds:1751869113 nanos:72707745}" Jul 7 06:18:37.081280 containerd[1923]: time="2025-07-07T06:18:37.081249524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"a43b6983376321781419272266b8ccb7966ad7128bb559c1973119c2b0b75524\" pid:8530 exited_at:{seconds:1751869117 nanos:81063285}" Jul 7 06:18:37.858338 containerd[1923]: time="2025-07-07T06:18:37.858314136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"4563d576a25ada25dc28f708412bca971d633fff11ce979d13729b3aa202d5dc\" pid:8562 exited_at:{seconds:1751869117 nanos:858132944}" Jul 7 06:18:42.304299 containerd[1923]: time="2025-07-07T06:18:42.304271521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"eb87f6c24696f138fc00c9f2660b2534113274e0425185d6ee2b6b25a3b925ec\" pid:8596 exited_at:{seconds:1751869122 nanos:304059181}" Jul 7 06:18:44.714047 containerd[1923]: time="2025-07-07T06:18:44.714024029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"1a0a20f9b1a73cc9f1e1d4d9c4d4dcc0ddf3e43e434381984b62d2ea8b39a8d3\" pid:8629 exited_at:{seconds:1751869124 nanos:713931489}" Jul 7 06:18:58.622080 systemd[1]: Started sshd@13-145.40.90.175:22-147.75.109.163:33256.service - OpenSSH per-connection server daemon (147.75.109.163:33256). Jul 7 06:18:58.658371 sshd[8643]: Accepted publickey for core from 147.75.109.163 port 33256 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4 Jul 7 06:18:58.659386 sshd-session[8643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:18:58.663642 systemd-logind[1913]: New session 12 of user core. Jul 7 06:18:58.677873 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:18:58.840501 sshd[8645]: Connection closed by 147.75.109.163 port 33256 Jul 7 06:18:58.840689 sshd-session[8643]: pam_unix(sshd:session): session closed for user core Jul 7 06:18:58.842473 systemd[1]: sshd@13-145.40.90.175:22-147.75.109.163:33256.service: Deactivated successfully. Jul 7 06:18:58.843476 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:18:58.844197 systemd-logind[1913]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:18:58.844840 systemd-logind[1913]: Removed session 12. 
Jul 7 06:19:01.937226 systemd[1]: Started sshd@14-145.40.90.175:22-39.152.196.78:35349.service - OpenSSH per-connection server daemon (39.152.196.78:35349).
Jul 7 06:19:03.019088 containerd[1923]: time="2025-07-07T06:19:03.019065424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"a9c6121af8478708e082571cbc2ae1b9362cfae9883a590821290609b4ae0715\" pid:8688 exited_at:{seconds:1751869143 nanos:18937403}"
Jul 7 06:19:03.869473 systemd[1]: Started sshd@15-145.40.90.175:22-147.75.109.163:33262.service - OpenSSH per-connection server daemon (147.75.109.163:33262).
Jul 7 06:19:03.944112 sshd[8699]: Accepted publickey for core from 147.75.109.163 port 33262 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:03.945117 sshd-session[8699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:03.948719 systemd-logind[1913]: New session 13 of user core.
Jul 7 06:19:03.964776 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:19:04.057713 sshd[8701]: Connection closed by 147.75.109.163 port 33262
Jul 7 06:19:04.057947 sshd-session[8699]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:04.061079 systemd[1]: sshd@15-145.40.90.175:22-147.75.109.163:33262.service: Deactivated successfully.
Jul 7 06:19:04.063158 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:19:04.063854 systemd-logind[1913]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:19:04.064591 systemd-logind[1913]: Removed session 13.
Jul 7 06:19:05.222593 sshd[8675]: Invalid user centos from 39.152.196.78 port 35349
Jul 7 06:19:05.992853 sshd-session[8728]: pam_faillock(sshd:auth): User unknown
Jul 7 06:19:05.996882 sshd[8675]: Postponed keyboard-interactive for invalid user centos from 39.152.196.78 port 35349 ssh2 [preauth]
Jul 7 06:19:06.810615 sshd-session[8728]: pam_unix(sshd:auth): check pass; user unknown
Jul 7 06:19:06.810631 sshd-session[8728]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=39.152.196.78
Jul 7 06:19:06.810957 sshd-session[8728]: pam_faillock(sshd:auth): User unknown
Jul 7 06:19:07.095835 containerd[1923]: time="2025-07-07T06:19:07.095737412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"5939c8906023cd98591d7142e5eca97a6660d02b6d5c957e28328849e8f2f6c9\" pid:8741 exited_at:{seconds:1751869147 nanos:95531244}"
Jul 7 06:19:07.903561 containerd[1923]: time="2025-07-07T06:19:07.903534379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"b0f4c6e703ba786819ed30cd5aa6c84bbb49a62acaea4d68118c77115eed6868\" pid:8774 exited_at:{seconds:1751869147 nanos:903345861}"
Jul 7 06:19:08.354842 sshd[8675]: PAM: Permission denied for illegal user centos from 39.152.196.78
Jul 7 06:19:08.355093 sshd[8675]: Failed keyboard-interactive/pam for invalid user centos from 39.152.196.78 port 35349 ssh2
Jul 7 06:19:09.016243 sshd[8675]: Connection closed by invalid user centos 39.152.196.78 port 35349 [preauth]
Jul 7 06:19:09.018327 systemd[1]: sshd@14-145.40.90.175:22-39.152.196.78:35349.service: Deactivated successfully.
Jul 7 06:19:09.084349 systemd[1]: Started sshd@16-145.40.90.175:22-147.75.109.163:34420.service - OpenSSH per-connection server daemon (147.75.109.163:34420).
Jul 7 06:19:09.126964 sshd[8800]: Accepted publickey for core from 147.75.109.163 port 34420 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:09.127671 sshd-session[8800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:09.130567 systemd-logind[1913]: New session 14 of user core.
Jul 7 06:19:09.140905 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:19:09.229236 sshd[8802]: Connection closed by 147.75.109.163 port 34420
Jul 7 06:19:09.229417 sshd-session[8800]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:09.244600 systemd[1]: sshd@16-145.40.90.175:22-147.75.109.163:34420.service: Deactivated successfully.
Jul 7 06:19:09.245423 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:19:09.245895 systemd-logind[1913]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:19:09.247061 systemd[1]: Started sshd@17-145.40.90.175:22-147.75.109.163:34432.service - OpenSSH per-connection server daemon (147.75.109.163:34432).
Jul 7 06:19:09.247421 systemd-logind[1913]: Removed session 14.
Jul 7 06:19:09.277131 sshd[8828]: Accepted publickey for core from 147.75.109.163 port 34432 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:09.277897 sshd-session[8828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:09.281044 systemd-logind[1913]: New session 15 of user core.
Jul 7 06:19:09.290831 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:19:09.432855 sshd[8830]: Connection closed by 147.75.109.163 port 34432
Jul 7 06:19:09.433057 sshd-session[8828]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:09.448710 systemd[1]: sshd@17-145.40.90.175:22-147.75.109.163:34432.service: Deactivated successfully.
Jul 7 06:19:09.449608 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:19:09.450064 systemd-logind[1913]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:19:09.451239 systemd[1]: Started sshd@18-145.40.90.175:22-147.75.109.163:34436.service - OpenSSH per-connection server daemon (147.75.109.163:34436).
Jul 7 06:19:09.451635 systemd-logind[1913]: Removed session 15.
Jul 7 06:19:09.480414 sshd[8853]: Accepted publickey for core from 147.75.109.163 port 34436 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:09.481172 sshd-session[8853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:09.483996 systemd-logind[1913]: New session 16 of user core.
Jul 7 06:19:09.503871 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:19:09.593003 sshd[8856]: Connection closed by 147.75.109.163 port 34436
Jul 7 06:19:09.593141 sshd-session[8853]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:09.594997 systemd[1]: sshd@18-145.40.90.175:22-147.75.109.163:34436.service: Deactivated successfully.
Jul 7 06:19:09.596128 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:19:09.596977 systemd-logind[1913]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:19:09.597659 systemd-logind[1913]: Removed session 16.
Jul 7 06:19:14.622793 systemd[1]: Started sshd@19-145.40.90.175:22-147.75.109.163:34448.service - OpenSSH per-connection server daemon (147.75.109.163:34448).
Jul 7 06:19:14.665083 sshd[8887]: Accepted publickey for core from 147.75.109.163 port 34448 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:14.665775 sshd-session[8887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:14.668715 systemd-logind[1913]: New session 17 of user core.
Jul 7 06:19:14.683836 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:19:14.775303 sshd[8889]: Connection closed by 147.75.109.163 port 34448
Jul 7 06:19:14.775476 sshd-session[8887]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:14.777265 systemd[1]: sshd@19-145.40.90.175:22-147.75.109.163:34448.service: Deactivated successfully.
Jul 7 06:19:14.778248 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:19:14.778997 systemd-logind[1913]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:19:14.779725 systemd-logind[1913]: Removed session 17.
Jul 7 06:19:19.792435 systemd[1]: Started sshd@20-145.40.90.175:22-147.75.109.163:37772.service - OpenSSH per-connection server daemon (147.75.109.163:37772).
Jul 7 06:19:19.838889 sshd[8915]: Accepted publickey for core from 147.75.109.163 port 37772 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:19.842095 sshd-session[8915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:19.854809 systemd-logind[1913]: New session 18 of user core.
Jul 7 06:19:19.867131 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:19:20.026266 sshd[8917]: Connection closed by 147.75.109.163 port 37772
Jul 7 06:19:20.026428 sshd-session[8915]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:20.028572 systemd[1]: sshd@20-145.40.90.175:22-147.75.109.163:37772.service: Deactivated successfully.
Jul 7 06:19:20.029564 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:19:20.030069 systemd-logind[1913]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:19:20.030778 systemd-logind[1913]: Removed session 18.
Jul 7 06:19:25.042432 systemd[1]: Started sshd@21-145.40.90.175:22-147.75.109.163:37780.service - OpenSSH per-connection server daemon (147.75.109.163:37780).
Jul 7 06:19:25.091702 sshd[8953]: Accepted publickey for core from 147.75.109.163 port 37780 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:25.095101 sshd-session[8953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:25.107687 systemd-logind[1913]: New session 19 of user core.
Jul 7 06:19:25.123099 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:19:25.276209 sshd[8955]: Connection closed by 147.75.109.163 port 37780
Jul 7 06:19:25.276387 sshd-session[8953]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:25.277986 systemd[1]: sshd@21-145.40.90.175:22-147.75.109.163:37780.service: Deactivated successfully.
Jul 7 06:19:25.278937 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:19:25.279824 systemd-logind[1913]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:19:25.280350 systemd-logind[1913]: Removed session 19.
Jul 7 06:19:30.294662 systemd[1]: Started sshd@22-145.40.90.175:22-147.75.109.163:59670.service - OpenSSH per-connection server daemon (147.75.109.163:59670).
Jul 7 06:19:30.326182 sshd[8983]: Accepted publickey for core from 147.75.109.163 port 59670 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:30.326973 sshd-session[8983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:30.329485 systemd-logind[1913]: New session 20 of user core.
Jul 7 06:19:30.340117 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:19:30.453020 sshd[8985]: Connection closed by 147.75.109.163 port 59670
Jul 7 06:19:30.453269 sshd-session[8983]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:30.477655 systemd[1]: sshd@22-145.40.90.175:22-147.75.109.163:59670.service: Deactivated successfully.
Jul 7 06:19:30.478981 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:19:30.479632 systemd-logind[1913]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:19:30.481615 systemd[1]: Started sshd@23-145.40.90.175:22-147.75.109.163:59680.service - OpenSSH per-connection server daemon (147.75.109.163:59680).
Jul 7 06:19:30.482329 systemd-logind[1913]: Removed session 20.
Jul 7 06:19:30.553364 sshd[9011]: Accepted publickey for core from 147.75.109.163 port 59680 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:30.557091 sshd-session[9011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:30.570016 systemd-logind[1913]: New session 21 of user core.
Jul 7 06:19:30.594163 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:19:30.828259 sshd[9013]: Connection closed by 147.75.109.163 port 59680
Jul 7 06:19:30.828575 sshd-session[9011]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:30.857128 systemd[1]: sshd@23-145.40.90.175:22-147.75.109.163:59680.service: Deactivated successfully.
Jul 7 06:19:30.860637 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:19:30.862682 systemd-logind[1913]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:19:30.868110 systemd[1]: Started sshd@24-145.40.90.175:22-147.75.109.163:59682.service - OpenSSH per-connection server daemon (147.75.109.163:59682).
Jul 7 06:19:30.869869 systemd-logind[1913]: Removed session 21.
Jul 7 06:19:30.944026 sshd[9036]: Accepted publickey for core from 147.75.109.163 port 59682 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:30.945111 sshd-session[9036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:30.949533 systemd-logind[1913]: New session 22 of user core.
Jul 7 06:19:30.971098 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:19:31.849188 sshd[9039]: Connection closed by 147.75.109.163 port 59682
Jul 7 06:19:31.849395 sshd-session[9036]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:31.876955 systemd[1]: sshd@24-145.40.90.175:22-147.75.109.163:59682.service: Deactivated successfully.
Jul 7 06:19:31.881246 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:19:31.883629 systemd-logind[1913]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:19:31.890178 systemd[1]: Started sshd@25-145.40.90.175:22-147.75.109.163:59686.service - OpenSSH per-connection server daemon (147.75.109.163:59686).
Jul 7 06:19:31.892178 systemd-logind[1913]: Removed session 22.
Jul 7 06:19:31.952236 sshd[9069]: Accepted publickey for core from 147.75.109.163 port 59686 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:31.953074 sshd-session[9069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:31.956360 systemd-logind[1913]: New session 23 of user core.
Jul 7 06:19:31.977122 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:19:32.161778 sshd[9072]: Connection closed by 147.75.109.163 port 59686
Jul 7 06:19:32.161959 sshd-session[9069]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:32.178021 systemd[1]: sshd@25-145.40.90.175:22-147.75.109.163:59686.service: Deactivated successfully.
Jul 7 06:19:32.179030 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:19:32.179501 systemd-logind[1913]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:19:32.181052 systemd[1]: Started sshd@26-145.40.90.175:22-147.75.109.163:59696.service - OpenSSH per-connection server daemon (147.75.109.163:59696).
Jul 7 06:19:32.181676 systemd-logind[1913]: Removed session 23.
Jul 7 06:19:32.258575 sshd[9096]: Accepted publickey for core from 147.75.109.163 port 59696 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:32.259508 sshd-session[9096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:32.262691 systemd-logind[1913]: New session 24 of user core.
Jul 7 06:19:32.272813 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:19:32.397552 sshd[9098]: Connection closed by 147.75.109.163 port 59696
Jul 7 06:19:32.397749 sshd-session[9096]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:32.399406 systemd[1]: sshd@26-145.40.90.175:22-147.75.109.163:59696.service: Deactivated successfully.
Jul 7 06:19:32.400399 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:19:32.401121 systemd-logind[1913]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:19:32.401692 systemd-logind[1913]: Removed session 24.
Jul 7 06:19:33.051573 containerd[1923]: time="2025-07-07T06:19:33.051544559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443fab478f205ceba26534fe069aa4bff6d6dc246e3cd01e9db47e13c9e309fc\" id:\"e0c560c230c40fcfeb9036759c09c32dc6279a51cac0d9ad189838ed8016c39d\" pid:9149 exited_at:{seconds:1751869173 nanos:51388726}"
Jul 7 06:19:37.084441 containerd[1923]: time="2025-07-07T06:19:37.084417845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"de6c0e5d30bc89fbfab7634373ece95b6a12fd02b7081933b59b35726d383cf3\" pid:9173 exited_at:{seconds:1751869177 nanos:84124704}"
Jul 7 06:19:37.423683 systemd[1]: Started sshd@27-145.40.90.175:22-147.75.109.163:46264.service - OpenSSH per-connection server daemon (147.75.109.163:46264).
Jul 7 06:19:37.502654 sshd[9195]: Accepted publickey for core from 147.75.109.163 port 46264 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:37.503562 sshd-session[9195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:37.506899 systemd-logind[1913]: New session 25 of user core.
Jul 7 06:19:37.530178 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:19:37.668382 sshd[9197]: Connection closed by 147.75.109.163 port 46264
Jul 7 06:19:37.668560 sshd-session[9195]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:37.670177 systemd[1]: sshd@27-145.40.90.175:22-147.75.109.163:46264.service: Deactivated successfully.
Jul 7 06:19:37.671258 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 06:19:37.672392 systemd-logind[1913]: Session 25 logged out. Waiting for processes to exit.
Jul 7 06:19:37.673065 systemd-logind[1913]: Removed session 25.
Jul 7 06:19:37.885863 containerd[1923]: time="2025-07-07T06:19:37.885776460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caece83979c8d3ebef038719636f291ada72b89844d4a83077ef08b02c592b1f\" id:\"797ba4db13e0f89267e2eae4c5f83acbd0d13dda192db67b986ceb93ee5e30b2\" pid:9233 exited_at:{seconds:1751869177 nanos:885485591}"
Jul 7 06:19:42.297732 containerd[1923]: time="2025-07-07T06:19:42.297688695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"006ad9a059b69c1b27eb2e76d6463f3af967294af00693cd672a742d06fc7d47\" id:\"c7db9ee0c258137e2ff51a4abe90add09e82d5bf4844ebc47d63c179894edcb7\" pid:9271 exited_at:{seconds:1751869182 nanos:297497711}"
Jul 7 06:19:42.694681 systemd[1]: Started sshd@28-145.40.90.175:22-147.75.109.163:46270.service - OpenSSH per-connection server daemon (147.75.109.163:46270).
Jul 7 06:19:42.724905 sshd[9295]: Accepted publickey for core from 147.75.109.163 port 46270 ssh2: RSA SHA256:S0KzqdC8bkayxagdx2EgNBSTYV05YFOCBof+IK8QDb4
Jul 7 06:19:42.728271 sshd-session[9295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:19:42.740754 systemd-logind[1913]: New session 26 of user core.
Jul 7 06:19:42.758048 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 06:19:42.850563 sshd[9297]: Connection closed by 147.75.109.163 port 46270
Jul 7 06:19:42.850789 sshd-session[9295]: pam_unix(sshd:session): session closed for user core
Jul 7 06:19:42.852901 systemd[1]: sshd@28-145.40.90.175:22-147.75.109.163:46270.service: Deactivated successfully.
Jul 7 06:19:42.853812 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 06:19:42.854286 systemd-logind[1913]: Session 26 logged out. Waiting for processes to exit.
Jul 7 06:19:42.854972 systemd-logind[1913]: Removed session 26.