Jul 16 00:02:40.905381 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Jul 15 22:01:05 -00 2025 Jul 16 00:02:40.905394 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 16 00:02:40.905401 kernel: BIOS-provided physical RAM map: Jul 16 00:02:40.905406 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Jul 16 00:02:40.905409 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Jul 16 00:02:40.905413 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Jul 16 00:02:40.905418 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Jul 16 00:02:40.905422 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Jul 16 00:02:40.905426 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819cbfff] usable Jul 16 00:02:40.905430 kernel: BIOS-e820: [mem 0x00000000819cc000-0x00000000819ccfff] ACPI NVS Jul 16 00:02:40.905434 kernel: BIOS-e820: [mem 0x00000000819cd000-0x00000000819cdfff] reserved Jul 16 00:02:40.905438 kernel: BIOS-e820: [mem 0x00000000819ce000-0x000000008afccfff] usable Jul 16 00:02:40.905442 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Jul 16 00:02:40.905446 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Jul 16 00:02:40.905451 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Jul 16 00:02:40.905457 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Jul 16 00:02:40.905461 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Jul 16 00:02:40.905465 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Jul 16 00:02:40.905470 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Jul 16 00:02:40.905474 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Jul 16 00:02:40.905478 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Jul 16 00:02:40.905483 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jul 16 00:02:40.905487 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Jul 16 00:02:40.905492 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Jul 16 00:02:40.905496 kernel: NX (Execute Disable) protection: active Jul 16 00:02:40.905500 kernel: APIC: Static calls initialized Jul 16 00:02:40.905506 kernel: SMBIOS 3.2.1 present. 
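The BIOS-e820 lines above are the firmware's physical memory map: ranges marked "usable" are RAM the kernel may use, the rest is firmware/ACPI/MMIO territory. A minimal Python sketch (not part of the boot tooling; the ranges are copied verbatim from the e820 lines above) that totals the usable regions, which comes out at roughly 32 GiB, in line with the Memory: summary printed later in this log:

```python
# Minimal sketch: sum the "usable" ranges from the BIOS-e820 map logged above.
# Each range is inclusive, so a region spans (end - start + 1) bytes.
usable = [
    (0x0000000000000000, 0x00000000000997ff),
    (0x0000000000100000, 0x000000003fffffff),
    (0x0000000040400000, 0x00000000819cbfff),
    (0x00000000819ce000, 0x000000008afccfff),
    (0x000000008c0b2000, 0x000000008c23afff),
    (0x000000008eeff000, 0x000000008eefffff),
    (0x0000000100000000, 0x000000086effffff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
print(f"usable RAM reported by firmware: {total_bytes / 2**30:.2f} GiB")
```

The gap between 0x8fffffff and 0x100000000 is the usual MMIO hole below 4 GiB; the bulk of the installed memory is remapped above it, which is why the largest usable range starts at 0x100000000.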
Jul 16 00:02:40.905511 kernel: DMI: Supermicro SYS-5019C-MR/X11SCM-F, BIOS 1.9 09/16/2022 Jul 16 00:02:40.905515 kernel: DMI: Memory slots populated: 1/4 Jul 16 00:02:40.905520 kernel: tsc: Detected 3400.000 MHz processor Jul 16 00:02:40.905524 kernel: tsc: Detected 3399.906 MHz TSC Jul 16 00:02:40.905529 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 16 00:02:40.905534 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 16 00:02:40.905538 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Jul 16 00:02:40.905543 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Jul 16 00:02:40.905547 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 16 00:02:40.905553 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Jul 16 00:02:40.905557 kernel: Using GB pages for direct mapping Jul 16 00:02:40.905562 kernel: ACPI: Early table checksum verification disabled Jul 16 00:02:40.905567 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Jul 16 00:02:40.905573 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Jul 16 00:02:40.905578 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Jul 16 00:02:40.905583 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Jul 16 00:02:40.905588 kernel: ACPI: FACS 0x000000008C66CF80 000040 Jul 16 00:02:40.905593 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Jul 16 00:02:40.905598 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Jul 16 00:02:40.905603 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Jul 16 00:02:40.905607 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Jul 16 00:02:40.905612 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Jul 16 00:02:40.905617 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Jul 16 00:02:40.905623 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Jul 16 00:02:40.905627 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Jul 16 00:02:40.905632 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:02:40.905637 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Jul 16 00:02:40.905642 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Jul 16 00:02:40.905646 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:02:40.905651 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:02:40.905656 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Jul 16 00:02:40.905662 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Jul 16 00:02:40.905666 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:02:40.905671 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Jul 16 00:02:40.905676 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Jul 16 00:02:40.905681 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Jul 16 00:02:40.905685 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Jul 16 00:02:40.905690 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Jul 16 00:02:40.905695 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Jul 16 00:02:40.905700 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Jul 16 00:02:40.905705 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Jul 16 00:02:40.905710 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Jul 16 00:02:40.905715 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Jul 16 00:02:40.905720 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Jul 16 00:02:40.905724 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Jul 16 00:02:40.905729 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Jul 16 00:02:40.905734 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Jul 16 00:02:40.905739 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Jul 16 00:02:40.905743 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Jul 16 00:02:40.905749 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Jul 16 00:02:40.905754 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Jul 16 00:02:40.905758 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Jul 16 00:02:40.905766 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Jul 16 00:02:40.905789 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Jul 16 00:02:40.905794 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Jul 16 00:02:40.905798 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Jul 16 00:02:40.905803 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Jul 16 00:02:40.905825 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Jul 16 00:02:40.905831 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Jul 16 00:02:40.905835 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Jul 16 00:02:40.905840 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Jul 16 00:02:40.905845 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Jul 16 00:02:40.905850 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Jul 16 00:02:40.905854 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Jul 16 00:02:40.905859 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Jul 16 00:02:40.905864 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Jul 16 00:02:40.905868 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Jul 16 00:02:40.905874 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Jul 16 00:02:40.905879 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Jul 16 00:02:40.905883 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Jul 16 00:02:40.905888 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Jul 16 00:02:40.905893 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Jul 16 00:02:40.905897 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Jul 16 00:02:40.905902 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Jul 16 00:02:40.905907 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Jul 16 00:02:40.905912 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Jul 16 00:02:40.905916 kernel: No NUMA configuration found Jul 16 00:02:40.905922 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Jul 16 00:02:40.905927 kernel: NODE_DATA(0) allocated [mem 0x86eff8dc0-0x86effffff] Jul 16 00:02:40.905932 kernel: Zone ranges: Jul 16 00:02:40.905936 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 16 00:02:40.905941 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 16 
00:02:40.905946 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Jul 16 00:02:40.905951 kernel: Device empty Jul 16 00:02:40.905955 kernel: Movable zone start for each node Jul 16 00:02:40.905960 kernel: Early memory node ranges Jul 16 00:02:40.905966 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Jul 16 00:02:40.905970 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Jul 16 00:02:40.905975 kernel: node 0: [mem 0x0000000040400000-0x00000000819cbfff] Jul 16 00:02:40.905980 kernel: node 0: [mem 0x00000000819ce000-0x000000008afccfff] Jul 16 00:02:40.905985 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Jul 16 00:02:40.905993 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Jul 16 00:02:40.905998 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Jul 16 00:02:40.906003 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Jul 16 00:02:40.906009 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 16 00:02:40.906014 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Jul 16 00:02:40.906019 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jul 16 00:02:40.906024 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Jul 16 00:02:40.906029 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Jul 16 00:02:40.906034 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Jul 16 00:02:40.906040 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Jul 16 00:02:40.906045 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Jul 16 00:02:40.906050 kernel: ACPI: PM-Timer IO Port: 0x1808 Jul 16 00:02:40.906055 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 16 00:02:40.906061 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 16 00:02:40.906066 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 16 00:02:40.906071 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 16 00:02:40.906076 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 16 00:02:40.906081 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 16 00:02:40.906085 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jul 16 00:02:40.906090 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 16 00:02:40.906095 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 16 00:02:40.906101 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 16 00:02:40.906106 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 16 00:02:40.906111 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 16 00:02:40.906116 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 16 00:02:40.906121 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 16 00:02:40.906126 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 16 00:02:40.906131 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 16 00:02:40.906136 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Jul 16 00:02:40.906141 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 16 00:02:40.906146 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 16 00:02:40.906152 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 16 00:02:40.906157 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 16 00:02:40.906162 kernel: TSC deadline timer available Jul 16 00:02:40.906168 kernel: CPU topo: Max. 
logical packages: 1 Jul 16 00:02:40.906173 kernel: CPU topo: Max. logical dies: 1 Jul 16 00:02:40.906178 kernel: CPU topo: Max. dies per package: 1 Jul 16 00:02:40.906183 kernel: CPU topo: Max. threads per core: 2 Jul 16 00:02:40.906188 kernel: CPU topo: Num. cores per package: 8 Jul 16 00:02:40.906192 kernel: CPU topo: Num. threads per package: 16 Jul 16 00:02:40.906198 kernel: CPU topo: Allowing 16 present CPUs plus 0 hotplug CPUs Jul 16 00:02:40.906203 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Jul 16 00:02:40.906209 kernel: Booting paravirtualized kernel on bare hardware Jul 16 00:02:40.906214 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 16 00:02:40.906219 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 16 00:02:40.906224 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 16 00:02:40.906229 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 16 00:02:40.906234 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 16 00:02:40.906240 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 16 00:02:40.906246 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 16 00:02:40.906251 kernel: random: crng init done Jul 16 00:02:40.906256 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Jul 16 00:02:40.906261 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jul 16 00:02:40.906266 kernel: Fallback order for Node 0: 0 Jul 16 00:02:40.906271 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8363245 Jul 16 00:02:40.906276 kernel: Policy zone: Normal Jul 16 00:02:40.906281 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 16 00:02:40.906287 kernel: software IO TLB: area num 16. Jul 16 00:02:40.906293 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 16 00:02:40.906298 kernel: ftrace: allocating 40095 entries in 157 pages Jul 16 00:02:40.906303 kernel: ftrace: allocated 157 pages with 5 groups Jul 16 00:02:40.906308 kernel: Dynamic Preempt: voluntary Jul 16 00:02:40.906313 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 16 00:02:40.906318 kernel: rcu: RCU event tracing is enabled. Jul 16 00:02:40.906323 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 16 00:02:40.906329 kernel: Trampoline variant of Tasks RCU enabled. Jul 16 00:02:40.906335 kernel: Rude variant of Tasks RCU enabled. Jul 16 00:02:40.906340 kernel: Tracing variant of Tasks RCU enabled. Jul 16 00:02:40.906345 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 16 00:02:40.906350 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 16 00:02:40.906355 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 16 00:02:40.906360 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
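The hash-table and per-CPU sizing printed above is internally consistent; a quick arithmetic check (assuming 8-byte hash-bucket pointers and 4 KiB pages, which is what the logged byte counts imply; the script itself is only an illustration, not kernel code):

```python
# Consistency check of the dentry/inode hash tables and percpu layout logged above.
# Assumes 8-byte hash bucket pointers and 4 KiB pages (x86-64 defaults).
PAGE = 4096

dentry_entries = 4194304
dentry_bytes = dentry_entries * 8                        # 33554432 bytes, as logged
dentry_order = (dentry_bytes // PAGE).bit_length() - 1   # 8192 pages -> order 13

inode_entries = 2097152
inode_bytes = inode_entries * 8                          # 16777216 bytes, as logged
inode_order = (inode_bytes // PAGE).bit_length() - 1     # 4096 pages -> order 12

# percpu: s207832 r8192 d29736 u262144 -> static + reserved + dynamic = 60 pages/cpu
s, r, d, u = 207832, 8192, 29736, 262144
assert s + r + d == 60 * PAGE                            # "Embedded 60 pages/cpu"

print(dentry_bytes, dentry_order, inode_bytes, inode_order, (s + r + d) // PAGE)
```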
Jul 16 00:02:40.906365 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 16 00:02:40.906370 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Jul 16 00:02:40.906375 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 16 00:02:40.906381 kernel: Console: colour VGA+ 80x25 Jul 16 00:02:40.906386 kernel: printk: legacy console [tty0] enabled Jul 16 00:02:40.906391 kernel: printk: legacy console [ttyS1] enabled Jul 16 00:02:40.906396 kernel: ACPI: Core revision 20240827 Jul 16 00:02:40.906401 kernel: hpet: HPET dysfunctional in PC10. Force disabled. Jul 16 00:02:40.906406 kernel: APIC: Switch to symmetric I/O mode setup Jul 16 00:02:40.906411 kernel: DMAR: Host address width 39 Jul 16 00:02:40.906416 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Jul 16 00:02:40.906421 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Jul 16 00:02:40.906427 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Jul 16 00:02:40.906432 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Jul 16 00:02:40.906437 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Jul 16 00:02:40.906443 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Jul 16 00:02:40.906448 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Jul 16 00:02:40.906453 kernel: x2apic enabled Jul 16 00:02:40.906458 kernel: APIC: Switched APIC routing to: cluster x2apic Jul 16 00:02:40.906463 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Jul 16 00:02:40.906468 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Jul 16 00:02:40.906474 kernel: CPU0: Thermal monitoring enabled (TM1) Jul 16 00:02:40.906479 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 16 00:02:40.906484 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 16 00:02:40.906489 kernel: process: using mwait in idle threads Jul 16 00:02:40.906494 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 16 00:02:40.906499 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Jul 16 00:02:40.906504 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 16 00:02:40.906509 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 16 00:02:40.906514 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 16 00:02:40.906519 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 16 00:02:40.906524 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 16 00:02:40.906530 kernel: TAA: Mitigation: TSX disabled Jul 16 00:02:40.906535 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Jul 16 00:02:40.906540 kernel: SRBDS: Mitigation: Microcode Jul 16 00:02:40.906545 kernel: GDS: Vulnerable: No microcode Jul 16 00:02:40.906550 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 16 00:02:40.906555 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 16 00:02:40.906560 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 16 00:02:40.906565 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 16 00:02:40.906570 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 16 00:02:40.906575 kernel: x86/fpu: Supporting XSAVE 
feature 0x010: 'MPX CSR' Jul 16 00:02:40.906580 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 16 00:02:40.906586 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 16 00:02:40.906591 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 16 00:02:40.906596 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Jul 16 00:02:40.906601 kernel: Freeing SMP alternatives memory: 32K Jul 16 00:02:40.906606 kernel: pid_max: default: 32768 minimum: 301 Jul 16 00:02:40.906611 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 16 00:02:40.906616 kernel: landlock: Up and running. Jul 16 00:02:40.906621 kernel: SELinux: Initializing. Jul 16 00:02:40.906626 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 16 00:02:40.906631 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 16 00:02:40.906636 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 16 00:02:40.906641 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Jul 16 00:02:40.906647 kernel: ... version: 4 Jul 16 00:02:40.906652 kernel: ... bit width: 48 Jul 16 00:02:40.906657 kernel: ... generic registers: 4 Jul 16 00:02:40.906662 kernel: ... value mask: 0000ffffffffffff Jul 16 00:02:40.906667 kernel: ... max period: 00007fffffffffff Jul 16 00:02:40.906672 kernel: ... fixed-purpose events: 3 Jul 16 00:02:40.906677 kernel: ... event mask: 000000070000000f Jul 16 00:02:40.906682 kernel: signal: max sigframe size: 2032 Jul 16 00:02:40.906687 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Jul 16 00:02:40.906693 kernel: rcu: Hierarchical SRCU implementation. Jul 16 00:02:40.906698 kernel: rcu: Max phase no-delay instances is 400. Jul 16 00:02:40.906703 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jul 16 00:02:40.906708 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Jul 16 00:02:40.906713 kernel: smp: Bringing up secondary CPUs ... Jul 16 00:02:40.906718 kernel: smpboot: x86: Booting SMP configuration: Jul 16 00:02:40.906723 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Jul 16 00:02:40.906729 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
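The XSAVE offsets in the x86/fpu lines above chain together: the 512-byte legacy FXSAVE area plus the 64-byte XSAVE header put feature 2 (AVX) at offset 576, and each later feature starts where the previous one ends, which is exactly the 960-byte compacted context the kernel reports. A small check using only the values from this log:

```python
# XSAVE layout check for the offsets/sizes logged above (compacted format).
LEGACY_FXSAVE = 512   # x87/SSE legacy region
XSAVE_HEADER = 64

offset = LEGACY_FXSAVE + XSAVE_HEADER        # 576 -> AVX (feature 2), as logged
sizes = {2: 256, 3: 64, 4: 64}               # AVX, MPX bounds registers, MPX CSR
for feature, size in sizes.items():
    print(f"xstate_offset[{feature}]: {offset}, size: {size}")
    offset += size
print("total context size:", offset)         # 960 bytes, matching the log
```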
Jul 16 00:02:40.906735 kernel: smp: Brought up 1 node, 16 CPUs Jul 16 00:02:40.906740 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Jul 16 00:02:40.906745 kernel: Memory: 32695448K/33452980K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 732512K reserved, 0K cma-reserved) Jul 16 00:02:40.906750 kernel: devtmpfs: initialized Jul 16 00:02:40.906755 kernel: x86/mm: Memory block size: 128MB Jul 16 00:02:40.906760 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819cc000-0x819ccfff] (4096 bytes) Jul 16 00:02:40.906767 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Jul 16 00:02:40.906773 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 16 00:02:40.906778 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 16 00:02:40.906802 kernel: pinctrl core: initialized pinctrl subsystem Jul 16 00:02:40.906808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 16 00:02:40.906813 kernel: audit: initializing netlink subsys (disabled) Jul 16 00:02:40.906831 kernel: audit: type=2000 audit(1752624152.041:1): state=initialized audit_enabled=0 res=1 Jul 16 00:02:40.906836 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 16 00:02:40.906841 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 16 00:02:40.906847 kernel: cpuidle: using governor menu Jul 16 00:02:40.906852 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 16 00:02:40.906857 kernel: dca service started, version 1.12.1 Jul 16 00:02:40.906863 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff] Jul 16 00:02:40.906868 kernel: PCI: Using configuration type 1 for base access Jul 16 00:02:40.906873 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
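The BogoMIPS figures above follow from the TSC frequency: calibration was skipped and lpj was derived from the timer (3399.906 MHz TSC gives lpj=3399906), and the per-CPU value is the usual lpj*HZ/500000, i.e. roughly twice the clock in MHz; the SMP total is just the per-CPU sum. A sketch under the assumption HZ=1000 (which is what lpj equalling the TSC kHz implies here):

```python
# BogoMIPS sanity check for the values logged above (assumes HZ=1000).
HZ = 1000
lpj = 3399906                              # from "6799.81 BogoMIPS (lpj=3399906)"
bogomips = lpj * HZ / 500000               # kernel prints lpj / (500000 / HZ)
print(f"per-CPU:  {bogomips:.2f}")         # 6799.81
print(f"16 CPUs: {16 * bogomips:.2f}")     # 108796.99, the SMP total in the log
```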
Jul 16 00:02:40.906878 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 16 00:02:40.906883 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 16 00:02:40.906888 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 16 00:02:40.906893 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 16 00:02:40.906898 kernel: ACPI: Added _OSI(Module Device) Jul 16 00:02:40.906903 kernel: ACPI: Added _OSI(Processor Device) Jul 16 00:02:40.906909 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 16 00:02:40.906915 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Jul 16 00:02:40.906920 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:02:40.906925 kernel: ACPI: SSDT 0xFFFF8E6502096400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Jul 16 00:02:40.906930 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:02:40.906935 kernel: ACPI: SSDT 0xFFFF8E6502163000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Jul 16 00:02:40.906940 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:02:40.906945 kernel: ACPI: SSDT 0xFFFF8E6500249800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Jul 16 00:02:40.906950 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:02:40.906955 kernel: ACPI: SSDT 0xFFFF8E6502160800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Jul 16 00:02:40.906960 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:02:40.906965 kernel: ACPI: SSDT 0xFFFF8E65001A6000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Jul 16 00:02:40.906970 kernel: ACPI: Dynamic OEM Table Load: Jul 16 00:02:40.906975 kernel: ACPI: SSDT 0xFFFF8E6502093800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Jul 16 00:02:40.906981 kernel: ACPI: Interpreter enabled Jul 16 00:02:40.906986 kernel: ACPI: PM: (supports S0 S5) Jul 16 00:02:40.906991 kernel: ACPI: Using IOAPIC for interrupt routing Jul 16 00:02:40.906996 kernel: HEST: Enabling Firmware First mode for corrected errors. Jul 16 00:02:40.907001 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Jul 16 00:02:40.907007 kernel: HEST: Table parsing has been initialized. Jul 16 00:02:40.907012 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
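The "vmemmap can be freed" figures above come from the HugeTLB vmemmap optimization: each 4 KiB base page is described by a 64-byte struct page, and for a huge page the kernel keeps one 4 KiB vmemmap page and can free the rest. A short derivation matching the logged numbers (assumes the usual 64-byte struct page; illustration only):

```python
# Where the HugeTLB "vmemmap can be freed" figures above come from.
# Assumes 64-byte struct page and 4 KiB base pages.
STRUCT_PAGE = 64
BASE_PAGE = 4096

def vmemmap_freed_kib(huge_page_bytes: int) -> int:
    vmemmap = (huge_page_bytes // BASE_PAGE) * STRUCT_PAGE  # metadata for the huge page
    kept = BASE_PAGE                                        # one vmemmap page is kept
    return (vmemmap - kept) // 1024

print(vmemmap_freed_kib(2 * 1024**2))   # 28 KiB, as logged for a 2.00 MiB page
print(vmemmap_freed_kib(1 * 1024**3))   # 16380 KiB, as logged for a 1.00 GiB page
```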
Jul 16 00:02:40.907017 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 16 00:02:40.907022 kernel: PCI: Using E820 reservations for host bridge windows Jul 16 00:02:40.907027 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Jul 16 00:02:40.907032 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Jul 16 00:02:40.907037 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Jul 16 00:02:40.907042 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Jul 16 00:02:40.907047 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Jul 16 00:02:40.907053 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Jul 16 00:02:40.907059 kernel: ACPI: \_TZ_.FN00: New power resource Jul 16 00:02:40.907064 kernel: ACPI: \_TZ_.FN01: New power resource Jul 16 00:02:40.907069 kernel: ACPI: \_TZ_.FN02: New power resource Jul 16 00:02:40.907074 kernel: ACPI: \_TZ_.FN03: New power resource Jul 16 00:02:40.907079 kernel: ACPI: \_TZ_.FN04: New power resource Jul 16 00:02:40.907084 kernel: ACPI: \PIN_: New power resource Jul 16 00:02:40.907089 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Jul 16 00:02:40.907160 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 16 00:02:40.907210 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Jul 16 00:02:40.907256 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Jul 16 00:02:40.907264 kernel: PCI host bridge to bus 0000:00 Jul 16 00:02:40.907310 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 16 00:02:40.907352 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 16 00:02:40.907392 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 16 00:02:40.907433 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Jul 16 00:02:40.907473 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Jul 16 00:02:40.907512 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Jul 16 00:02:40.907568 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 conventional PCI endpoint Jul 16 00:02:40.907626 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 PCIe Root Port Jul 16 00:02:40.907674 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:02:40.907723 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 16 00:02:40.907773 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 16 00:02:40.907856 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.907907 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 conventional PCI endpoint Jul 16 00:02:40.907954 kernel: pci 0000:00:08.0: BAR 0 [mem 0x9551f000-0x9551ffff 64bit] Jul 16 00:02:40.908004 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 conventional PCI endpoint Jul 16 00:02:40.908050 kernel: pci 0000:00:12.0: BAR 0 [mem 0x9551e000-0x9551efff 64bit] Jul 16 00:02:40.908105 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 conventional PCI endpoint Jul 16 00:02:40.908152 kernel: pci 0000:00:14.0: BAR 0 [mem 0x95500000-0x9550ffff 64bit] Jul 16 00:02:40.908197 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Jul 16 00:02:40.908246 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 conventional PCI endpoint Jul 16 00:02:40.908292 kernel: pci 0000:00:14.2: BAR 0 [mem 0x95512000-0x95513fff 64bit] Jul 16 
00:02:40.908337 kernel: pci 0000:00:14.2: BAR 2 [mem 0x9551d000-0x9551dfff 64bit] Jul 16 00:02:40.908389 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 conventional PCI endpoint Jul 16 00:02:40.908442 kernel: pci 0000:00:15.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 16 00:02:40.908498 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 conventional PCI endpoint Jul 16 00:02:40.908545 kernel: pci 0000:00:15.1: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 16 00:02:40.908596 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:02:40.908643 kernel: pci 0000:00:16.0: BAR 0 [mem 0x9551a000-0x9551afff 64bit] Jul 16 00:02:40.908690 kernel: pci 0000:00:16.0: PME# supported from D3hot Jul 16 00:02:40.908740 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:02:40.908813 kernel: pci 0000:00:16.1: BAR 0 [mem 0x95519000-0x95519fff 64bit] Jul 16 00:02:40.908876 kernel: pci 0000:00:16.1: PME# supported from D3hot Jul 16 00:02:40.908926 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:02:40.908973 kernel: pci 0000:00:16.4: BAR 0 [mem 0x95518000-0x95518fff 64bit] Jul 16 00:02:40.909022 kernel: pci 0000:00:16.4: PME# supported from D3hot Jul 16 00:02:40.909071 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 conventional PCI endpoint Jul 16 00:02:40.909117 kernel: pci 0000:00:17.0: BAR 0 [mem 0x95510000-0x95511fff] Jul 16 00:02:40.909163 kernel: pci 0000:00:17.0: BAR 1 [mem 0x95517000-0x955170ff] Jul 16 00:02:40.909209 kernel: pci 0000:00:17.0: BAR 2 [io 0x6050-0x6057] Jul 16 00:02:40.909256 kernel: pci 0000:00:17.0: BAR 3 [io 0x6040-0x6043] Jul 16 00:02:40.909301 kernel: pci 0000:00:17.0: BAR 4 [io 0x6020-0x603f] Jul 16 00:02:40.909347 kernel: pci 0000:00:17.0: BAR 5 [mem 0x95516000-0x955167ff] Jul 16 00:02:40.909393 kernel: pci 0000:00:17.0: PME# supported from D3hot Jul 16 00:02:40.909443 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 PCIe Root Port Jul 16 00:02:40.909490 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 16 00:02:40.909536 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.909590 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 PCIe Root Port Jul 16 00:02:40.909637 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 16 00:02:40.909683 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 16 00:02:40.909729 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 16 00:02:40.909778 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.909871 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 PCIe Root Port Jul 16 00:02:40.909922 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 16 00:02:40.909968 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 16 00:02:40.910014 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 16 00:02:40.910060 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.910112 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 PCIe Root Port Jul 16 00:02:40.910159 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 16 00:02:40.910205 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.910256 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 PCIe Root Port Jul 16 00:02:40.910305 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 16 00:02:40.910351 kernel: pci 0000:00:1c.3: 
bridge window [io 0x3000-0x3fff] Jul 16 00:02:40.910397 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 16 00:02:40.910444 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.910494 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 conventional PCI endpoint Jul 16 00:02:40.910542 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x00000000-0x00000fff 64bit] Jul 16 00:02:40.910594 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 conventional PCI endpoint Jul 16 00:02:40.910644 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 conventional PCI endpoint Jul 16 00:02:40.910691 kernel: pci 0000:00:1f.4: BAR 0 [mem 0x95514000-0x955140ff 64bit] Jul 16 00:02:40.910736 kernel: pci 0000:00:1f.4: BAR 4 [io 0xefa0-0xefbf] Jul 16 00:02:40.910815 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 conventional PCI endpoint Jul 16 00:02:40.910878 kernel: pci 0000:00:1f.5: BAR 0 [mem 0xfe010000-0xfe010fff] Jul 16 00:02:40.910931 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:02:40.910982 kernel: pci 0000:01:00.0: BAR 0 [mem 0x92000000-0x93ffffff 64bit pref] Jul 16 00:02:40.911029 kernel: pci 0000:01:00.0: ROM [mem 0x95200000-0x952fffff pref] Jul 16 00:02:40.911076 kernel: pci 0000:01:00.0: PME# supported from D3cold Jul 16 00:02:40.911124 kernel: pci 0000:01:00.0: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 16 00:02:40.911170 kernel: pci 0000:01:00.0: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 16 00:02:40.911223 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 PCIe Endpoint Jul 16 00:02:40.911274 kernel: pci 0000:01:00.1: BAR 0 [mem 0x90000000-0x91ffffff 64bit pref] Jul 16 00:02:40.911322 kernel: pci 0000:01:00.1: ROM [mem 0x95100000-0x951fffff pref] Jul 16 00:02:40.911369 kernel: pci 0000:01:00.1: PME# supported from D3cold Jul 16 00:02:40.911416 kernel: pci 0000:01:00.1: VF BAR 0 [mem 0x00000000-0x000fffff 64bit pref] Jul 16 00:02:40.911465 kernel: pci 0000:01:00.1: VF BAR 0 [mem 0x00000000-0x007fffff 64bit pref]: contains BAR 0 for 8 VFs Jul 16 00:02:40.911513 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:02:40.911560 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 16 00:02:40.911611 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Jul 16 00:02:40.911661 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 16 00:02:40.911707 kernel: pci 0000:03:00.0: BAR 0 [mem 0x95400000-0x9547ffff] Jul 16 00:02:40.911754 kernel: pci 0000:03:00.0: BAR 2 [io 0x5000-0x501f] Jul 16 00:02:40.911826 kernel: pci 0000:03:00.0: BAR 3 [mem 0x95480000-0x95483fff] Jul 16 00:02:40.911887 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.911934 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 16 00:02:40.911985 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Jul 16 00:02:40.912035 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 PCIe Endpoint Jul 16 00:02:40.912082 kernel: pci 0000:04:00.0: BAR 0 [mem 0x95300000-0x9537ffff] Jul 16 00:02:40.912129 kernel: pci 0000:04:00.0: BAR 2 [io 0x4000-0x401f] Jul 16 00:02:40.912176 kernel: pci 0000:04:00.0: BAR 3 [mem 0x95380000-0x95383fff] Jul 16 00:02:40.912223 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Jul 16 00:02:40.912270 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 16 00:02:40.912316 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 16 00:02:40.912371 kernel: pci 
0000:06:00.0: [1a03:1150] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 16 00:02:40.912418 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 16 00:02:40.912465 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 16 00:02:40.912512 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 16 00:02:40.912559 kernel: pci 0000:06:00.0: enabling Extended Tags Jul 16 00:02:40.912605 kernel: pci 0000:06:00.0: supports D1 D2 Jul 16 00:02:40.912652 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:02:40.912701 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 16 00:02:40.912755 kernel: pci_bus 0000:07: extended config space not accessible Jul 16 00:02:40.912849 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 conventional PCI endpoint Jul 16 00:02:40.912900 kernel: pci 0000:07:00.0: BAR 0 [mem 0x94000000-0x94ffffff] Jul 16 00:02:40.912949 kernel: pci 0000:07:00.0: BAR 1 [mem 0x95000000-0x9501ffff] Jul 16 00:02:40.912998 kernel: pci 0000:07:00.0: BAR 2 [io 0x3000-0x307f] Jul 16 00:02:40.913046 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 16 00:02:40.913097 kernel: pci 0000:07:00.0: supports D1 D2 Jul 16 00:02:40.913146 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 16 00:02:40.913194 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 16 00:02:40.913201 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Jul 16 00:02:40.913207 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Jul 16 00:02:40.913213 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Jul 16 00:02:40.913218 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Jul 16 00:02:40.913224 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Jul 16 00:02:40.913231 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Jul 16 00:02:40.913236 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Jul 16 00:02:40.913242 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Jul 16 00:02:40.913247 kernel: iommu: Default domain type: Translated Jul 16 00:02:40.913253 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 16 00:02:40.913258 kernel: PCI: Using ACPI for IRQ routing Jul 16 00:02:40.913264 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 16 00:02:40.913269 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Jul 16 00:02:40.913274 kernel: e820: reserve RAM buffer [mem 0x819cc000-0x83ffffff] Jul 16 00:02:40.913281 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Jul 16 00:02:40.913286 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Jul 16 00:02:40.913291 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Jul 16 00:02:40.913297 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Jul 16 00:02:40.913345 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Jul 16 00:02:40.913393 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Jul 16 00:02:40.913442 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 16 00:02:40.913450 kernel: vgaarb: loaded Jul 16 00:02:40.913455 kernel: clocksource: Switched to clocksource tsc-early Jul 16 00:02:40.913462 kernel: VFS: Disk quotas dquot_6.6.0 Jul 16 00:02:40.913468 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 16 00:02:40.913473 kernel: pnp: PnP ACPI init Jul 16 00:02:40.913520 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Jul 
16 00:02:40.913567 kernel: pnp 00:02: [dma 0 disabled] Jul 16 00:02:40.913612 kernel: pnp 00:03: [dma 0 disabled] Jul 16 00:02:40.913658 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Jul 16 00:02:40.913702 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Jul 16 00:02:40.913751 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Jul 16 00:02:40.913852 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Jul 16 00:02:40.913896 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Jul 16 00:02:40.913937 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Jul 16 00:02:40.913978 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Jul 16 00:02:40.914020 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Jul 16 00:02:40.914064 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Jul 16 00:02:40.914106 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Jul 16 00:02:40.914151 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Jul 16 00:02:40.914193 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Jul 16 00:02:40.914235 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Jul 16 00:02:40.914276 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Jul 16 00:02:40.914318 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Jul 16 00:02:40.914361 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Jul 16 00:02:40.914403 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Jul 16 00:02:40.914448 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Jul 16 00:02:40.914456 kernel: pnp: PnP ACPI: found 9 devices Jul 16 00:02:40.914462 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 16 00:02:40.914468 kernel: NET: Registered PF_INET protocol family Jul 16 00:02:40.914473 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 16 00:02:40.914480 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 16 00:02:40.914486 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 16 00:02:40.914491 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 16 00:02:40.914497 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 16 00:02:40.914502 kernel: TCP: Hash tables configured (established 262144 bind 65536) Jul 16 00:02:40.914508 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 16 00:02:40.914513 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 16 00:02:40.914519 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 16 00:02:40.914524 kernel: NET: Registered PF_XDP protocol family Jul 16 00:02:40.914571 kernel: pci 0000:00:15.0: BAR 0 [mem 0x95515000-0x95515fff 64bit]: assigned Jul 16 00:02:40.914618 kernel: pci 0000:00:15.1: BAR 0 [mem 0x9551b000-0x9551bfff 64bit]: assigned Jul 16 00:02:40.914665 kernel: pci 0000:00:1e.0: BAR 0 [mem 0x9551c000-0x9551cfff 64bit]: assigned Jul 16 00:02:40.914713 kernel: pci 0000:01:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: can't assign; no space Jul 16 00:02:40.914762 kernel: pci 0000:01:00.0: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 16 00:02:40.914851 kernel: pci 0000:01:00.1: VF BAR 0 [mem size 0x00800000 64bit 
pref]: can't assign; no space Jul 16 00:02:40.914899 kernel: pci 0000:01:00.1: VF BAR 0 [mem size 0x00800000 64bit pref]: failed to assign Jul 16 00:02:40.914947 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 16 00:02:40.914993 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Jul 16 00:02:40.915039 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Jul 16 00:02:40.915086 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Jul 16 00:02:40.915132 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Jul 16 00:02:40.915178 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Jul 16 00:02:40.915227 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Jul 16 00:02:40.915273 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Jul 16 00:02:40.915320 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Jul 16 00:02:40.915366 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Jul 16 00:02:40.915412 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Jul 16 00:02:40.915459 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Jul 16 00:02:40.915507 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Jul 16 00:02:40.915554 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Jul 16 00:02:40.915600 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Jul 16 00:02:40.915646 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Jul 16 00:02:40.915694 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Jul 16 00:02:40.915736 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Jul 16 00:02:40.915780 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 16 00:02:40.915853 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 16 00:02:40.915893 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 16 00:02:40.915933 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Jul 16 00:02:40.915973 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Jul 16 00:02:40.916021 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Jul 16 00:02:40.916067 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Jul 16 00:02:40.916113 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Jul 16 00:02:40.916155 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Jul 16 00:02:40.916201 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jul 16 00:02:40.916243 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Jul 16 00:02:40.916292 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Jul 16 00:02:40.916336 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Jul 16 00:02:40.916381 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Jul 16 00:02:40.916425 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Jul 16 00:02:40.916433 kernel: PCI: CLS 64 bytes, default 64 Jul 16 00:02:40.916438 kernel: DMAR: No ATSR found Jul 16 00:02:40.916444 kernel: DMAR: No SATC found Jul 16 00:02:40.916449 kernel: DMAR: dmar0: Using Queued invalidation Jul 16 00:02:40.916495 kernel: pci 0000:00:00.0: Adding to iommu group 0 Jul 16 00:02:40.916546 kernel: pci 0000:00:01.0: Adding to iommu group 1 Jul 16 00:02:40.916592 kernel: pci 0000:00:08.0: Adding to iommu group 2 Jul 16 00:02:40.916638 kernel: pci 0000:00:12.0: Adding to iommu group 3 Jul 16 00:02:40.916686 kernel: pci 0000:00:14.0: Adding to iommu group 4 Jul 16 
00:02:40.916731 kernel: pci 0000:00:14.2: Adding to iommu group 4 Jul 16 00:02:40.916801 kernel: pci 0000:00:15.0: Adding to iommu group 5 Jul 16 00:02:40.916861 kernel: pci 0000:00:15.1: Adding to iommu group 5 Jul 16 00:02:40.916907 kernel: pci 0000:00:16.0: Adding to iommu group 6 Jul 16 00:02:40.916955 kernel: pci 0000:00:16.1: Adding to iommu group 6 Jul 16 00:02:40.917002 kernel: pci 0000:00:16.4: Adding to iommu group 6 Jul 16 00:02:40.917048 kernel: pci 0000:00:17.0: Adding to iommu group 7 Jul 16 00:02:40.917096 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Jul 16 00:02:40.917142 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Jul 16 00:02:40.917188 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Jul 16 00:02:40.917234 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Jul 16 00:02:40.917280 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Jul 16 00:02:40.917327 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Jul 16 00:02:40.917374 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Jul 16 00:02:40.917420 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Jul 16 00:02:40.917465 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Jul 16 00:02:40.917513 kernel: pci 0000:01:00.0: Adding to iommu group 1 Jul 16 00:02:40.917560 kernel: pci 0000:01:00.1: Adding to iommu group 1 Jul 16 00:02:40.917607 kernel: pci 0000:03:00.0: Adding to iommu group 15 Jul 16 00:02:40.917656 kernel: pci 0000:04:00.0: Adding to iommu group 16 Jul 16 00:02:40.917705 kernel: pci 0000:06:00.0: Adding to iommu group 17 Jul 16 00:02:40.917755 kernel: pci 0000:07:00.0: Adding to iommu group 17 Jul 16 00:02:40.917763 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Jul 16 00:02:40.917771 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 16 00:02:40.917777 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Jul 16 00:02:40.917801 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Jul 16 00:02:40.917806 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Jul 16 00:02:40.917812 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Jul 16 00:02:40.917831 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Jul 16 00:02:40.917883 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Jul 16 00:02:40.917892 kernel: Initialise system trusted keyrings Jul 16 00:02:40.917897 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Jul 16 00:02:40.917902 kernel: Key type asymmetric registered Jul 16 00:02:40.917908 kernel: Asymmetric key parser 'x509' registered Jul 16 00:02:40.917913 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Jul 16 00:02:40.917919 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Jul 16 00:02:40.917924 kernel: clocksource: Switched to clocksource tsc Jul 16 00:02:40.917931 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 16 00:02:40.917937 kernel: io scheduler mq-deadline registered Jul 16 00:02:40.917942 kernel: io scheduler kyber registered Jul 16 00:02:40.917947 kernel: io scheduler bfq registered Jul 16 00:02:40.917994 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Jul 16 00:02:40.918041 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Jul 16 00:02:40.918087 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Jul 16 00:02:40.918134 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Jul 16 00:02:40.918181 
kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Jul 16 00:02:40.918230 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Jul 16 00:02:40.918281 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Jul 16 00:02:40.918290 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Jul 16 00:02:40.918295 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Jul 16 00:02:40.918301 kernel: pstore: Using crash dump compression: deflate Jul 16 00:02:40.918306 kernel: pstore: Registered erst as persistent store backend Jul 16 00:02:40.918312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 16 00:02:40.918317 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 16 00:02:40.918324 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 16 00:02:40.918330 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 16 00:02:40.918335 kernel: hpet_acpi_add: no address or irqs in _CRS Jul 16 00:02:40.918381 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Jul 16 00:02:40.918389 kernel: i8042: PNP: No PS/2 controller found. Jul 16 00:02:40.918431 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Jul 16 00:02:40.918473 kernel: rtc_cmos rtc_cmos: registered as rtc0 Jul 16 00:02:40.918516 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-07-16T00:02:39 UTC (1752624159) Jul 16 00:02:40.918560 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Jul 16 00:02:40.918568 kernel: intel_pstate: Intel P-state driver initializing Jul 16 00:02:40.918574 kernel: intel_pstate: Disabling energy efficiency optimization Jul 16 00:02:40.918579 kernel: intel_pstate: HWP enabled Jul 16 00:02:40.918584 kernel: NET: Registered PF_INET6 protocol family Jul 16 00:02:40.918590 kernel: Segment Routing with IPv6 Jul 16 00:02:40.918595 kernel: In-situ OAM (IOAM) with IPv6 Jul 16 00:02:40.918601 kernel: NET: Registered PF_PACKET protocol family Jul 16 00:02:40.918606 kernel: Key type dns_resolver registered Jul 16 00:02:40.918613 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Jul 16 00:02:40.918618 kernel: microcode: Current revision: 0x000000f4 Jul 16 00:02:40.918624 kernel: IPI shorthand broadcast: enabled Jul 16 00:02:40.918629 kernel: sched_clock: Marking stable (3791064217, 1495634472)->(6878141674, -1591442985) Jul 16 00:02:40.918635 kernel: registered taskstats version 1 Jul 16 00:02:40.918640 kernel: Loading compiled-in X.509 certificates Jul 16 00:02:40.918645 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: cfc533be64675f3c66ee10d42aa8c5ce2115881d' Jul 16 00:02:40.918651 kernel: Demotion targets for Node 0: null Jul 16 00:02:40.918656 kernel: Key type .fscrypt registered Jul 16 00:02:40.918662 kernel: Key type fscrypt-provisioning registered Jul 16 00:02:40.918668 kernel: ima: Allocated hash algorithm: sha1 Jul 16 00:02:40.918673 kernel: ima: No architecture policies found Jul 16 00:02:40.918678 kernel: clk: Disabling unused clocks Jul 16 00:02:40.918684 kernel: Warning: unable to open an initial console. 
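The rtc_cmos line above reports both an ISO timestamp and an epoch value when it sets the system clock from the hardware RTC; the two agree. A one-liner confirming the conversion (illustration only):

```python
# Check that epoch 1752624159 from the rtc_cmos line really is 2025-07-16T00:02:39 UTC.
from datetime import datetime, timezone

print(datetime.fromtimestamp(1752624159, tz=timezone.utc).isoformat())
# -> 2025-07-16T00:02:39+00:00
```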
Jul 16 00:02:40.918689 kernel: Freeing unused kernel image (initmem) memory: 54424K Jul 16 00:02:40.918695 kernel: Write protecting the kernel read-only data: 24576k Jul 16 00:02:40.918700 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 16 00:02:40.918706 kernel: Run /init as init process Jul 16 00:02:40.918712 kernel: with arguments: Jul 16 00:02:40.918717 kernel: /init Jul 16 00:02:40.918723 kernel: with environment: Jul 16 00:02:40.918728 kernel: HOME=/ Jul 16 00:02:40.918733 kernel: TERM=linux Jul 16 00:02:40.918738 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 16 00:02:40.918744 systemd[1]: Successfully made /usr/ read-only. Jul 16 00:02:40.918752 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:02:40.918759 systemd[1]: Detected architecture x86-64. Jul 16 00:02:40.918766 systemd[1]: Running in initrd. Jul 16 00:02:40.918772 systemd[1]: No hostname configured, using default hostname. Jul 16 00:02:40.918777 systemd[1]: Hostname set to . Jul 16 00:02:40.918801 systemd[1]: Initializing machine ID from random generator. Jul 16 00:02:40.918807 systemd[1]: Queued start job for default target initrd.target. Jul 16 00:02:40.918813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:02:40.918833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:02:40.918839 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 16 00:02:40.918845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:02:40.918850 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 16 00:02:40.918856 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 16 00:02:40.918863 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 16 00:02:40.918869 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 16 00:02:40.918875 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:02:40.918881 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:02:40.918887 systemd[1]: Reached target paths.target - Path Units. Jul 16 00:02:40.918892 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:02:40.918898 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:02:40.918903 systemd[1]: Reached target timers.target - Timer Units. Jul 16 00:02:40.918909 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:02:40.918914 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:02:40.918921 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 16 00:02:40.918927 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 16 00:02:40.918932 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
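The \x2d sequences in the .device unit names above are systemd's path escaping: path separators become '-', and characters such as '-' inside a component are hex-escaped, so /dev/disk/by-label/EFI-SYSTEM becomes dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. A rough illustration of the idea (simplified and not systemd's actual implementation; the real tool is `systemd-escape --path`):

```python
# Rough illustration of how a block-device path maps to the .device unit
# names seen above (simplified; real escaping lives in systemd itself).
def escape_component(component: str) -> str:
    out = []
    for ch in component:
        if ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))   # e.g. '-' -> \x2d
    return "".join(out)

def path_to_device_unit(path: str) -> str:
    parts = [p for p in path.split("/") if p]
    return "-".join(escape_component(p) for p in parts) + ".device"

print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```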
Jul 16 00:02:40.918938 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:02:40.918944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:02:40.918949 systemd[1]: Reached target sockets.target - Socket Units. Jul 16 00:02:40.918955 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 16 00:02:40.918960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:02:40.918966 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 16 00:02:40.918973 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 16 00:02:40.918978 systemd[1]: Starting systemd-fsck-usr.service... Jul 16 00:02:40.918984 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 16 00:02:40.919000 systemd-journald[300]: Collecting audit messages is disabled. Jul 16 00:02:40.919015 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 16 00:02:40.919021 systemd-journald[300]: Journal started Jul 16 00:02:40.919034 systemd-journald[300]: Runtime Journal (/run/log/journal/3548c402a2bc46d3b76833f89d63456d) is 8M, max 640.1M, 632.1M free. Jul 16 00:02:40.930372 systemd-modules-load[302]: Inserted module 'overlay' Jul 16 00:02:40.945935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:02:40.945949 systemd[1]: Started systemd-journald.service - Journal Service. Jul 16 00:02:40.969809 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 16 00:02:40.975782 kernel: Bridge firewalling registered Jul 16 00:02:40.975847 systemd-modules-load[302]: Inserted module 'br_netfilter' Jul 16 00:02:40.979239 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 16 00:02:40.979647 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:02:40.980004 systemd[1]: Finished systemd-fsck-usr.service. Jul 16 00:02:40.980340 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 16 00:02:40.983305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 16 00:02:40.984410 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 16 00:02:40.985536 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:02:40.999604 systemd-tmpfiles[314]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 16 00:02:41.000278 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:02:41.008421 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 16 00:02:41.129853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:02:41.160603 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:02:41.184348 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 16 00:02:41.204022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 16 00:02:41.232270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jul 16 00:02:41.238282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:02:41.247043 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:02:41.254388 systemd-resolved[325]: Positive Trust Anchors: Jul 16 00:02:41.254394 systemd-resolved[325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 00:02:41.254428 systemd-resolved[325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 16 00:02:41.256690 systemd-resolved[325]: Defaulting to hostname 'linux'. Jul 16 00:02:41.257577 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 16 00:02:41.275990 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 16 00:02:41.289985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:02:41.404721 dracut-cmdline[341]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66 Jul 16 00:02:41.585813 kernel: SCSI subsystem initialized Jul 16 00:02:41.598796 kernel: Loading iSCSI transport class v2.0-870. Jul 16 00:02:41.611785 kernel: iscsi: registered transport (tcp) Jul 16 00:02:41.635538 kernel: iscsi: registered transport (qla4xxx) Jul 16 00:02:41.635554 kernel: QLogic iSCSI HBA Driver Jul 16 00:02:41.645697 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 16 00:02:41.677646 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:02:41.688975 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 16 00:02:41.806303 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 16 00:02:41.818593 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 16 00:02:41.942811 kernel: raid6: avx2x4 gen() 20585 MB/s Jul 16 00:02:41.963798 kernel: raid6: avx2x2 gen() 42191 MB/s Jul 16 00:02:41.989830 kernel: raid6: avx2x1 gen() 46243 MB/s Jul 16 00:02:41.989846 kernel: raid6: using algorithm avx2x1 gen() 46243 MB/s Jul 16 00:02:42.016935 kernel: raid6: .... xor() 24966 MB/s, rmw enabled Jul 16 00:02:42.016955 kernel: raid6: using avx2x2 recovery algorithm Jul 16 00:02:42.037797 kernel: xor: automatically using best checksumming function avx Jul 16 00:02:42.141803 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 16 00:02:42.145316 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 16 00:02:42.155918 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
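dracut-cmdline echoes the exact command line it will act on: a mix of bare switches and key=value options, with console= repeated on purpose so kernel output goes to both tty0 and ttyS1. A minimal sketch of splitting such a line into switches and repeatable options; this is only an illustration of the data shape, not the parser dracut or systemd actually use:

# Minimal kernel-command-line splitter: bare switches vs. key=value options.
# Real parsers handle quoting and per-module rules; the sample string is a
# shortened copy of the line logged above (on a live system, read /proc/cmdline).
import shlex

cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected "
           "flatcar.oem.id=packet flatcar.autologin")

switches, options = [], {}
for token in shlex.split(cmdline):
    key, sep, value = token.partition("=")
    if sep:
        options.setdefault(key, []).append(value)   # console= is allowed to repeat
    else:
        switches.append(token)                       # e.g. flatcar.autologin

print(switches)                # ['flatcar.autologin']
print(options["console"])      # ['tty0', 'ttyS1,115200n8']
print(options["verity.usr"])   # ['PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132']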
Jul 16 00:02:42.199715 systemd-udevd[553]: Using default interface naming scheme 'v255'. Jul 16 00:02:42.202892 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:02:42.230565 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 16 00:02:42.263796 dracut-pre-trigger[565]: rd.md=0: removing MD RAID activation Jul 16 00:02:42.277105 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 16 00:02:42.278020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 16 00:02:42.368059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:02:42.384781 kernel: cryptd: max_cpu_qlen set to 1000 Jul 16 00:02:42.401838 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 16 00:02:42.401868 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 16 00:02:42.405078 kernel: PTP clock support registered Jul 16 00:02:42.406903 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 16 00:02:42.423892 kernel: AES CTR mode by8 optimization enabled Jul 16 00:02:42.423908 kernel: libata version 3.00 loaded. Jul 16 00:02:42.424080 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 00:02:42.483117 kernel: ACPI: bus type USB registered Jul 16 00:02:42.483134 kernel: usbcore: registered new interface driver usbfs Jul 16 00:02:42.483145 kernel: usbcore: registered new interface driver hub Jul 16 00:02:42.483153 kernel: usbcore: registered new device driver usb Jul 16 00:02:42.483159 kernel: ahci 0000:00:17.0: version 3.0 Jul 16 00:02:42.483255 kernel: ahci 0000:00:17.0: AHCI vers 0001.0301, 32 command slots, 6 Gbps, SATA mode Jul 16 00:02:42.483321 kernel: ahci 0000:00:17.0: 7/7 ports implemented (port mask 0x7f) Jul 16 00:02:42.483385 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Jul 16 00:02:42.424172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:02:42.521693 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Jul 16 00:02:42.521708 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Jul 16 00:02:42.521716 kernel: scsi host0: ahci Jul 16 00:02:42.521820 kernel: scsi host1: ahci Jul 16 00:02:42.521894 kernel: scsi host2: ahci Jul 16 00:02:42.521954 kernel: scsi host3: ahci Jul 16 00:02:42.522010 kernel: scsi host4: ahci Jul 16 00:02:42.522072 kernel: scsi host5: ahci Jul 16 00:02:42.522129 kernel: scsi host6: ahci Jul 16 00:02:42.521638 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:02:42.678834 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 lpm-pol 0 Jul 16 00:02:42.678853 kernel: igb 0000:03:00.0: added PHC on eth0 Jul 16 00:02:42.678952 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 lpm-pol 0 Jul 16 00:02:42.678961 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 16 00:02:42.679028 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:44 Jul 16 00:02:42.679093 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Jul 16 00:02:42.679157 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 lpm-pol 0 Jul 16 00:02:42.679166 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Jul 16 00:02:42.679228 kernel: igb 0000:04:00.0: added PHC on eth1 Jul 16 00:02:42.679298 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 lpm-pol 0 Jul 16 00:02:42.679306 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Jul 16 00:02:42.679370 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 00:25:90:bd:75:45 Jul 16 00:02:42.679433 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Jul 16 00:02:42.679495 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 lpm-pol 0 Jul 16 00:02:42.679503 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Jul 16 00:02:42.679564 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 lpm-pol 0 Jul 16 00:02:42.679572 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 lpm-pol 0 Jul 16 00:02:42.674205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:02:42.679013 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:02:42.804833 kernel: mlx5_core 0000:01:00.0: PTM is not supported by PCIe Jul 16 00:02:42.804943 kernel: mlx5_core 0000:01:00.0: firmware version: 14.29.2002 Jul 16 00:02:42.805013 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 16 00:02:42.837928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:02:42.966808 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 16 00:02:42.966825 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 16 00:02:42.973805 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 16 00:02:42.979828 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Jul 16 00:02:42.985805 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 16 00:02:42.991829 kernel: ata7: SATA link down (SStatus 0 SControl 300) Jul 16 00:02:42.997805 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 16 00:02:43.002793 kernel: ata1.00: Model 'Micron_5200_MTFDDAK480TDN', rev ' D1MU020', applying quirks: zeroaftertrim Jul 16 00:02:43.019462 kernel: ata1.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 16 00:02:43.020814 kernel: ata2.00: Model 'Micron_5200_MTFDDAK480TDN', rev ' D1MU020', applying quirks: zeroaftertrim Jul 16 00:02:43.036732 kernel: ata2.00: ATA-10: Micron_5200_MTFDDAK480TDN, D1MU020, max UDMA/133 Jul 16 00:02:43.056363 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 16 00:02:43.056379 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 16 00:02:43.056461 kernel: ata1.00: Features: NCQ-prio Jul 16 00:02:43.060837 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Jul 16 00:02:43.060924 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Jul 16 00:02:43.079602 kernel: ata2.00: Features: NCQ-prio Jul 16 00:02:43.089809 kernel: ata1.00: configured for UDMA/133 Jul 16 00:02:43.089827 kernel: ata2.00: configured for UDMA/133 Jul 16 00:02:43.089835 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 16 00:02:43.102823 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5200_MTFD U020 PQ: 0 ANSI: 5 Jul 16 00:02:43.117772 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Jul 16 00:02:43.118013 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Jul 16 00:02:43.123770 kernel: xhci_hcd 
0000:00:14.0: xHCI Host Controller Jul 16 00:02:43.123892 kernel: ata1.00: Enabling discard_zeroes_data Jul 16 00:02:43.133607 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Jul 16 00:02:43.133697 kernel: ata2.00: Enabling discard_zeroes_data Jul 16 00:02:43.133707 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 16 00:02:43.133827 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Jul 16 00:02:43.145715 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Jul 16 00:02:43.145868 kernel: sd 0:0:0:0: [sdb] Write Protect is off Jul 16 00:02:43.145974 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Jul 16 00:02:43.146086 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Jul 16 00:02:43.146149 kernel: sd 1:0:0:0: [sda] Write Protect is off Jul 16 00:02:43.146207 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Jul 16 00:02:43.146264 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 16 00:02:43.146320 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Jul 16 00:02:43.146377 kernel: ata2.00: Enabling discard_zeroes_data Jul 16 00:02:43.158472 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Jul 16 00:02:43.158558 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Jul 16 00:02:43.158623 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 16 00:02:43.167597 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Jul 16 00:02:43.172385 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Jul 16 00:02:43.179891 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Jul 16 00:02:43.190197 kernel: ata1.00: Enabling discard_zeroes_data Jul 16 00:02:43.190213 kernel: hub 1-0:1.0: USB hub found Jul 16 00:02:43.263552 kernel: hub 1-0:1.0: 16 ports detected Jul 16 00:02:43.264807 kernel: hub 2-0:1.0: USB hub found Jul 16 00:02:43.269834 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 16 00:02:43.269865 kernel: hub 2-0:1.0: 10 ports detected Jul 16 00:02:43.270060 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Jul 16 00:02:43.275777 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 16 00:02:43.275947 kernel: GPT:9289727 != 937703087 Jul 16 00:02:43.279768 kernel: mlx5_core 0000:01:00.1: PTM is not supported by PCIe Jul 16 00:02:43.279860 kernel: mlx5_core 0000:01:00.1: firmware version: 14.29.2002 Jul 16 00:02:43.279936 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Jul 16 00:02:43.326416 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 16 00:02:43.330273 kernel: GPT:9289727 != 937703087 Jul 16 00:02:43.335682 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 16 00:02:43.340977 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 16 00:02:43.346180 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Jul 16 00:02:43.383472 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5200_MTFDDAK480TDN EFI-SYSTEM. Jul 16 00:02:43.406021 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5200_MTFDDAK480TDN ROOT. Jul 16 00:02:43.421287 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5200_MTFDDAK480TDN USR-A. 
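The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 937703087") mean the primary header still records the backup header at sector 9289727, where the written Flatcar image ended, while the 480 GB disk actually ends at sector 937703087. A sketch of the check the kernel is effectively doing; /dev/sda and the 512-byte logical sector size are taken from the surrounding log lines, reading the device needs root, and the field offsets follow the standard GPT header layout:

# Read the primary GPT header (LBA 1) and compare its alternate-header LBA
# with the device's real last sector; a mismatch is what the kernel reports
# as "Primary header thinks Alt. header is not at the end of the disk."
import os
import struct

DEV = "/dev/sda"      # the disk the warnings refer to (assumption)
SECTOR = 512          # logical block size reported above

with open(DEV, "rb") as f:
    f.seek(1 * SECTOR)                      # primary GPT header lives at LBA 1
    header = f.read(92)
    f.seek(0, os.SEEK_END)
    last_lba = f.tell() // SECTOR - 1       # last addressable sector

assert header[:8] == b"EFI PART", "no GPT signature at LBA 1"
alternate_lba = struct.unpack_from("<Q", header, 32)[0]   # offset 32: backup header LBA

if alternate_lba != last_lba:
    print(f"GPT:{alternate_lba} != {last_lba}  (backup header not at end of disk)")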
Jul 16 00:02:43.449847 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5200_MTFDDAK480TDN USR-A. Jul 16 00:02:43.466354 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jul 16 00:02:43.491302 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 16 00:02:43.517839 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Jul 16 00:02:43.529975 disk-uuid[782]: Primary Header is updated. Jul 16 00:02:43.529975 disk-uuid[782]: Secondary Entries is updated. Jul 16 00:02:43.529975 disk-uuid[782]: Secondary Header is updated. Jul 16 00:02:43.579801 kernel: ata2.00: Enabling discard_zeroes_data Jul 16 00:02:43.579814 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Jul 16 00:02:43.579910 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 16 00:02:43.579920 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Jul 16 00:02:43.579989 kernel: ata2.00: Enabling discard_zeroes_data Jul 16 00:02:43.592812 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 16 00:02:43.641111 kernel: hub 1-14:1.0: USB hub found Jul 16 00:02:43.641232 kernel: hub 1-14:1.0: 4 ports detected Jul 16 00:02:43.804927 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jul 16 00:02:43.815771 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Jul 16 00:02:43.815912 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Jul 16 00:02:43.828897 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 16 00:02:43.838332 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 16 00:02:43.867975 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:02:43.877842 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 16 00:02:43.878380 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 16 00:02:43.934957 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Jul 16 00:02:43.941525 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 16 00:02:44.044784 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 16 00:02:44.057653 kernel: usbcore: registered new interface driver usbhid Jul 16 00:02:44.057670 kernel: usbhid: USB HID core driver Jul 16 00:02:44.071772 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Jul 16 00:02:44.145865 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Jul 16 00:02:44.146073 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Jul 16 00:02:44.157745 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Jul 16 00:02:44.565128 kernel: ata2.00: Enabling discard_zeroes_data Jul 16 00:02:44.584790 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 16 00:02:44.584813 disk-uuid[783]: The operation has completed successfully. Jul 16 00:02:44.620952 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 16 00:02:44.621000 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 16 00:02:44.658763 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
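The disk-uuid lines above ("Primary Header is updated. Secondary Entries is updated. Secondary Header is updated.", then "The operation has completed successfully.") show the initrd rewriting the GPT in place; the wording matches sgdisk's output, and the warning does not recur on the later partition rescans. Done by hand, the usual repair is to relocate the backup structures to the true end of the disk, for example with sgdisk (a sketch only; the log's own hint is GNU Parted, and /dev/sda plus root privileges are assumptions):

# Move the backup GPT header and partition-entry array to the end of the disk.
# Requires root and the gdisk/sgdisk package; equivalent to accepting the
# "fix the GPT" prompt in parted.
import subprocess

DISK = "/dev/sda"  # hypothetical target disk
subprocess.run(["sgdisk", "--move-second-header", DISK], check=True)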
Jul 16 00:02:44.689782 sh[834]: Success Jul 16 00:02:44.718783 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 16 00:02:44.718804 kernel: device-mapper: uevent: version 1.0.3 Jul 16 00:02:44.728047 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 16 00:02:44.740831 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Jul 16 00:02:44.782020 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 16 00:02:44.792073 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 16 00:02:44.827582 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 16 00:02:44.890861 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 16 00:02:44.890877 kernel: BTRFS: device fsid 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (847) Jul 16 00:02:44.890885 kernel: BTRFS info (device dm-0): first mount of filesystem 5e84ae48-fef7-4576-99b7-f45b3ea9aa4e Jul 16 00:02:44.890892 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 16 00:02:44.890898 kernel: BTRFS info (device dm-0): using free-space-tree Jul 16 00:02:44.896790 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 16 00:02:44.904125 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 16 00:02:44.921952 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 16 00:02:44.922433 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 16 00:02:44.954348 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 16 00:02:45.005811 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (879) Jul 16 00:02:45.023316 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 16 00:02:45.023336 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 16 00:02:45.029209 kernel: BTRFS info (device sda6): using free-space-tree Jul 16 00:02:45.030013 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 16 00:02:45.045782 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 16 00:02:45.060132 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 16 00:02:45.070917 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 16 00:02:45.080159 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 16 00:02:45.127969 systemd-networkd[1017]: lo: Link UP Jul 16 00:02:45.127973 systemd-networkd[1017]: lo: Gained carrier Jul 16 00:02:45.130421 systemd-networkd[1017]: Enumeration completed Jul 16 00:02:45.130465 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 16 00:02:45.131057 systemd-networkd[1017]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:02:45.143938 systemd[1]: Reached target network.target - Network. Jul 16 00:02:45.158693 systemd-networkd[1017]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:02:45.186556 systemd-networkd[1017]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. 
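verity-setup.service is the step that turns the USR-A partition plus the verity.usrhash= value from the command line into the integrity-checked /dev/mapper/usr that is mounted right after. Purely as a generic illustration (Flatcar's initrd performs this itself, and the real invocation differs in where it finds the hash data on the partition), a dm-verity mapping is opened from a data device, a hash device and a root hash roughly like this:

# Generic dm-verity activation: data device + hash device + expected root hash.
# Paths are placeholders taken from the command line above; on Flatcar the hash
# tree is embedded in the USR partition, so this exact call is illustrative only.
import subprocess

DATA_DEV = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"  # verity.usr=
HASH_DEV = DATA_DEV   # placeholder: the hash-tree location depends on the image layout
ROOT_HASH = "e99cfd77676fb46bb6e7e7d8fcebb095dd84f43a354bdf152777c6b07182cd66"  # verity.usrhash=

# Creates /dev/mapper/usr; reads whose blocks do not hash up to ROOT_HASH fail.
subprocess.run(["veritysetup", "open", DATA_DEV, "usr", HASH_DEV, ROOT_HASH], check=True)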
Jul 16 00:02:45.204909 unknown[1016]: fetched base config from "system" Jul 16 00:02:45.202738 ignition[1016]: Ignition 2.21.0 Jul 16 00:02:45.204912 unknown[1016]: fetched user config from "system" Jul 16 00:02:45.202744 ignition[1016]: Stage: fetch-offline Jul 16 00:02:45.205981 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 16 00:02:45.202778 ignition[1016]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:02:45.223118 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 16 00:02:45.202785 ignition[1016]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:02:45.223685 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 16 00:02:45.202845 ignition[1016]: parsed url from cmdline: "" Jul 16 00:02:45.202847 ignition[1016]: no config URL provided Jul 16 00:02:45.202850 ignition[1016]: reading system config file "/usr/lib/ignition/user.ign" Jul 16 00:02:45.202877 ignition[1016]: parsing config with SHA512: 29ac2dc1ad7597aceb0e625cbd4208b6d60417aee9a97113164d9f97128c170adf3c583539f593d89adeca5292fa58b99077f2737cacde8252d0b483847f58b9 Jul 16 00:02:45.205096 ignition[1016]: fetch-offline: fetch-offline passed Jul 16 00:02:45.205099 ignition[1016]: POST message to Packet Timeline Jul 16 00:02:45.205101 ignition[1016]: POST Status error: resource requires networking Jul 16 00:02:45.352929 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 16 00:02:45.351842 systemd-networkd[1017]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 16 00:02:45.205131 ignition[1016]: Ignition finished successfully Jul 16 00:02:45.256168 ignition[1035]: Ignition 2.21.0 Jul 16 00:02:45.256171 ignition[1035]: Stage: kargs Jul 16 00:02:45.256260 ignition[1035]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:02:45.256266 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:02:45.257300 ignition[1035]: kargs: kargs passed Jul 16 00:02:45.257304 ignition[1035]: POST message to Packet Timeline Jul 16 00:02:45.257322 ignition[1035]: GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:02:45.257601 ignition[1035]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52156->[::1]:53: read: connection refused Jul 16 00:02:45.457830 ignition[1035]: GET https://metadata.packet.net/metadata: attempt #2 Jul 16 00:02:45.458168 ignition[1035]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59050->[::1]:53: read: connection refused Jul 16 00:02:45.531894 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 16 00:02:45.532950 systemd-networkd[1017]: eno1: Link UP Jul 16 00:02:45.533089 systemd-networkd[1017]: eno2: Link UP Jul 16 00:02:45.533217 systemd-networkd[1017]: enp1s0f0np0: Link UP Jul 16 00:02:45.533388 systemd-networkd[1017]: enp1s0f0np0: Gained carrier Jul 16 00:02:45.547201 systemd-networkd[1017]: enp1s0f1np1: Link UP Jul 16 00:02:45.548007 systemd-networkd[1017]: enp1s0f1np1: Gained carrier Jul 16 00:02:45.579956 systemd-networkd[1017]: enp1s0f0np0: DHCPv4 address 147.75.203.227/31, gateway 147.75.203.226 acquired from 145.40.83.140 Jul 16 00:02:45.858597 ignition[1035]: GET https://metadata.packet.net/metadata: attempt #3 Jul 16 00:02:45.859909 ignition[1035]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup 
metadata.packet.net on [::1]:53: read udp [::1]:50016->[::1]:53: read: connection refused Jul 16 00:02:46.660341 ignition[1035]: GET https://metadata.packet.net/metadata: attempt #4 Jul 16 00:02:46.661464 ignition[1035]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46174->[::1]:53: read: connection refused Jul 16 00:02:46.676065 systemd-networkd[1017]: enp1s0f0np0: Gained IPv6LL Jul 16 00:02:47.124444 systemd-networkd[1017]: enp1s0f1np1: Gained IPv6LL Jul 16 00:02:48.263157 ignition[1035]: GET https://metadata.packet.net/metadata: attempt #5 Jul 16 00:02:48.264280 ignition[1035]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44141->[::1]:53: read: connection refused Jul 16 00:02:51.467871 ignition[1035]: GET https://metadata.packet.net/metadata: attempt #6 Jul 16 00:02:52.454196 ignition[1035]: GET result: OK Jul 16 00:02:53.027523 ignition[1035]: Ignition finished successfully Jul 16 00:02:53.033582 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 16 00:02:53.043707 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 16 00:02:53.089043 ignition[1054]: Ignition 2.21.0 Jul 16 00:02:53.089055 ignition[1054]: Stage: disks Jul 16 00:02:53.089270 ignition[1054]: no configs at "/usr/lib/ignition/base.d" Jul 16 00:02:53.089285 ignition[1054]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:02:53.090534 ignition[1054]: disks: disks passed Jul 16 00:02:53.090537 ignition[1054]: POST message to Packet Timeline Jul 16 00:02:53.090552 ignition[1054]: GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:02:54.080117 ignition[1054]: GET result: OK Jul 16 00:02:55.162647 ignition[1054]: Ignition finished successfully Jul 16 00:02:55.167499 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 16 00:02:55.179961 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 16 00:02:55.197022 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 16 00:02:55.215979 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 16 00:02:55.235071 systemd[1]: Reached target sysinit.target - System Initialization. Jul 16 00:02:55.253077 systemd[1]: Reached target basic.target - Basic System. Jul 16 00:02:55.272616 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 16 00:02:55.324771 systemd-fsck[1075]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 16 00:02:55.334186 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 16 00:02:55.348194 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 16 00:02:55.452540 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 16 00:02:55.466010 kernel: EXT4-fs (sda9): mounted filesystem e7011b63-42ae-44ea-90bf-c826e39292b2 r/w with ordered data mode. Quota mode: none. Jul 16 00:02:55.460152 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 16 00:02:55.476900 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 16 00:02:55.504537 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
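The string of "GET error ... [::1]:53 ... connection refused" attempts above is Ignition polling https://metadata.packet.net/metadata before the links were up and before DHCP had supplied a resolver (the lookups were still hitting the stub address [::1]:53); the attempt timestamps show the delay roughly doubling from 0.2 s per retry until the GET succeeds. A rough approximation of such a loop, with the helper name and constants inferred from the timestamps rather than taken from Ignition's source:

# Retry-with-doubling-backoff fetch, approximating the attempt pattern above
# (about 0.2 s, 0.4 s, 0.8 s, ... between tries until the network is usable).
import time
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"

def fetch_metadata(url=METADATA_URL, first_delay=0.2, max_attempts=10):
    delay = first_delay
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError as err:                      # covers DNS and socket errors
            print(f"GET {url}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("metadata endpoint unreachable")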
Jul 16 00:02:55.549891 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 (8:6) scanned by mount (1084) Jul 16 00:02:55.549910 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 16 00:02:55.549919 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 16 00:02:55.549926 kernel: BTRFS info (device sda6): using free-space-tree Jul 16 00:02:55.512579 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 16 00:02:55.556114 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Jul 16 00:02:55.588033 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 16 00:02:55.588129 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 16 00:02:55.615804 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 16 00:02:55.638964 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 16 00:02:55.646698 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 16 00:02:55.654161 coreos-metadata[1102]: Jul 16 00:02:55.642 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:02:55.681837 coreos-metadata[1086]: Jul 16 00:02:55.642 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:02:55.705238 initrd-setup-root[1116]: cut: /sysroot/etc/passwd: No such file or directory Jul 16 00:02:55.713908 initrd-setup-root[1123]: cut: /sysroot/etc/group: No such file or directory Jul 16 00:02:55.723008 initrd-setup-root[1130]: cut: /sysroot/etc/shadow: No such file or directory Jul 16 00:02:55.731854 initrd-setup-root[1137]: cut: /sysroot/etc/gshadow: No such file or directory Jul 16 00:02:55.764790 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 16 00:02:55.774703 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 16 00:02:55.783590 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 16 00:02:55.819568 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 16 00:02:55.835905 kernel: BTRFS info (device sda6): last unmount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 16 00:02:55.843502 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 16 00:02:55.843591 ignition[1205]: INFO : Ignition 2.21.0 Jul 16 00:02:55.843591 ignition[1205]: INFO : Stage: mount Jul 16 00:02:55.873018 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:02:55.873018 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:02:55.873018 ignition[1205]: INFO : mount: mount passed Jul 16 00:02:55.873018 ignition[1205]: INFO : POST message to Packet Timeline Jul 16 00:02:55.873018 ignition[1205]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:02:56.742032 coreos-metadata[1102]: Jul 16 00:02:56.741 INFO Fetch successful Jul 16 00:02:56.820939 systemd[1]: flatcar-static-network.service: Deactivated successfully. Jul 16 00:02:56.820994 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. 
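flatcar-metadata-hostname.service, started above, boils down to fetching the same Equinix Metal metadata document and persisting its hostname for the real root filesystem; its "wrote hostname ... to /sysroot/etc/hostname" result appears a few seconds later in the log. A simplified sketch of that flow, assuming the metadata JSON carries a top-level "hostname" field and omitting the agent's retries and error handling:

# Simplified metadata-hostname flow: fetch instance metadata, persist hostname.
# The "hostname" field and the /sysroot/etc/hostname target mirror the log;
# retries and error handling are left out.
import json
import urllib.request

METADATA_URL = "https://metadata.packet.net/metadata"
HOSTNAME_PATH = "/sysroot/etc/hostname"

with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
    hostname = json.load(resp)["hostname"]

with open(HOSTNAME_PATH, "w") as f:
    f.write(hostname + "\n")
print(f"wrote hostname {hostname} to {HOSTNAME_PATH}")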
Jul 16 00:02:57.424726 ignition[1205]: INFO : GET result: OK Jul 16 00:02:57.525422 coreos-metadata[1086]: Jul 16 00:02:57.525 INFO Fetch successful Jul 16 00:02:57.586512 coreos-metadata[1086]: Jul 16 00:02:57.586 INFO wrote hostname ci-4372.0.1-n-fdc39dabbd to /sysroot/etc/hostname Jul 16 00:02:57.588131 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 16 00:02:57.971635 ignition[1205]: INFO : Ignition finished successfully Jul 16 00:02:57.975836 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 16 00:02:57.991675 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 16 00:02:58.027717 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 16 00:02:58.080656 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 (8:6) scanned by mount (1231) Jul 16 00:02:58.080685 kernel: BTRFS info (device sda6): first mount of filesystem 00a9d8f6-6c10-4cef-8e74-b38121477a0b Jul 16 00:02:58.088750 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 16 00:02:58.094644 kernel: BTRFS info (device sda6): using free-space-tree Jul 16 00:02:58.099250 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 16 00:02:58.131162 ignition[1248]: INFO : Ignition 2.21.0 Jul 16 00:02:58.131162 ignition[1248]: INFO : Stage: files Jul 16 00:02:58.143998 ignition[1248]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:02:58.143998 ignition[1248]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:02:58.143998 ignition[1248]: DEBUG : files: compiled without relabeling support, skipping Jul 16 00:02:58.143998 ignition[1248]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 16 00:02:58.143998 ignition[1248]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 16 00:02:58.143998 ignition[1248]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 16 00:02:58.143998 ignition[1248]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 16 00:02:58.143998 ignition[1248]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 16 00:02:58.143998 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 16 00:02:58.143998 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 16 00:02:58.134883 unknown[1248]: wrote ssh authorized keys file for user: core Jul 16 00:02:58.272042 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 16 00:02:58.498441 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 16 00:02:58.498441 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 16 00:02:58.530994 
ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 00:02:58.530994 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 16 00:02:59.540197 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 16 00:02:59.972683 ignition[1248]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 16 00:02:59.972683 ignition[1248]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 16 00:03:00.001011 ignition[1248]: INFO : files: files passed Jul 16 00:03:00.001011 ignition[1248]: INFO : POST message to Packet Timeline Jul 16 00:03:00.001011 ignition[1248]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:03:01.742710 ignition[1248]: INFO : GET result: OK Jul 16 00:03:02.468672 ignition[1248]: INFO : Ignition finished successfully Jul 16 00:03:02.472327 systemd[1]: Finished 
ignition-files.service - Ignition (files). Jul 16 00:03:02.489298 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 16 00:03:02.512297 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 16 00:03:02.522118 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 16 00:03:02.522176 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 16 00:03:02.547876 initrd-setup-root-after-ignition[1288]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:03:02.547876 initrd-setup-root-after-ignition[1288]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:03:02.593001 initrd-setup-root-after-ignition[1292]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 16 00:03:02.556496 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:03:02.571116 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 16 00:03:02.604318 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 16 00:03:02.697884 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 16 00:03:02.697938 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 16 00:03:02.715119 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 16 00:03:02.726007 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 16 00:03:02.751094 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 16 00:03:02.752603 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 16 00:03:02.814170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:03:02.827247 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 16 00:03:02.898048 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:03:02.908599 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:03:02.927684 systemd[1]: Stopped target timers.target - Timer Units. Jul 16 00:03:02.944508 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 16 00:03:02.944947 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 16 00:03:02.980179 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 16 00:03:02.990461 systemd[1]: Stopped target basic.target - Basic System. Jul 16 00:03:03.008460 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 16 00:03:03.025602 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 16 00:03:03.046464 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 16 00:03:03.055756 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 16 00:03:03.073650 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 16 00:03:03.091791 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 16 00:03:03.119507 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 16 00:03:03.138626 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 16 00:03:03.157473 systemd[1]: Stopped target swap.target - Swaps. 
Jul 16 00:03:03.174381 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 16 00:03:03.174814 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 16 00:03:03.199493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:03:03.218507 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:03:03.227503 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 16 00:03:03.227940 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:03:03.257351 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 16 00:03:03.257750 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 16 00:03:03.287598 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 16 00:03:03.288075 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 16 00:03:03.305662 systemd[1]: Stopped target paths.target - Path Units. Jul 16 00:03:03.322337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 16 00:03:03.322846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:03:03.342368 systemd[1]: Stopped target slices.target - Slice Units. Jul 16 00:03:03.359382 systemd[1]: Stopped target sockets.target - Socket Units. Jul 16 00:03:03.376360 systemd[1]: iscsid.socket: Deactivated successfully. Jul 16 00:03:03.376652 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 16 00:03:03.394402 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 16 00:03:03.394682 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 16 00:03:03.416497 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 16 00:03:03.416938 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 16 00:03:03.543042 ignition[1313]: INFO : Ignition 2.21.0 Jul 16 00:03:03.543042 ignition[1313]: INFO : Stage: umount Jul 16 00:03:03.543042 ignition[1313]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 16 00:03:03.543042 ignition[1313]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Jul 16 00:03:03.543042 ignition[1313]: INFO : umount: umount passed Jul 16 00:03:03.543042 ignition[1313]: INFO : POST message to Packet Timeline Jul 16 00:03:03.543042 ignition[1313]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Jul 16 00:03:03.433429 systemd[1]: ignition-files.service: Deactivated successfully. Jul 16 00:03:03.433852 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 16 00:03:03.449430 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 16 00:03:03.449854 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 16 00:03:03.468754 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 16 00:03:03.481033 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 16 00:03:03.481171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:03:03.501402 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 16 00:03:03.507989 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 16 00:03:03.508149 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 16 00:03:03.533049 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 16 00:03:03.533169 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 16 00:03:03.574510 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 16 00:03:03.575415 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 16 00:03:03.575510 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 16 00:03:03.601989 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 16 00:03:03.602239 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 16 00:03:05.198263 ignition[1313]: INFO : GET result: OK Jul 16 00:03:05.608292 ignition[1313]: INFO : Ignition finished successfully Jul 16 00:03:05.612591 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 16 00:03:05.612941 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 16 00:03:05.626932 systemd[1]: Stopped target network.target - Network. Jul 16 00:03:05.641142 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 16 00:03:05.641348 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 16 00:03:05.659193 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 16 00:03:05.659367 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 16 00:03:05.675182 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 16 00:03:05.675368 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 16 00:03:05.692181 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 16 00:03:05.692355 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 16 00:03:05.710165 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 16 00:03:05.710350 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 16 00:03:05.726572 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 16 00:03:05.744261 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 16 00:03:05.760822 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 16 00:03:05.761135 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 16 00:03:05.784192 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 16 00:03:05.784751 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 16 00:03:05.785068 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 16 00:03:05.801586 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 16 00:03:05.803735 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 16 00:03:05.816093 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 16 00:03:05.816208 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:03:05.837621 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 16 00:03:05.850958 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 16 00:03:05.850990 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 16 00:03:05.870027 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 16 00:03:05.870084 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:03:05.880390 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 16 00:03:05.880452 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 16 00:03:05.906102 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 16 00:03:05.906236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:03:05.926789 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:03:05.951380 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 16 00:03:05.951579 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:03:05.952616 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 16 00:03:05.953017 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:03:05.968740 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 16 00:03:05.968783 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 16 00:03:05.977028 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 16 00:03:05.977066 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:03:05.996114 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 16 00:03:05.996185 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 16 00:03:06.047050 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 16 00:03:06.047247 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 16 00:03:06.075302 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 16 00:03:06.075499 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 16 00:03:06.114241 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 16 00:03:06.130895 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 16 00:03:06.131089 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:03:06.392960 systemd-journald[300]: Received SIGTERM from PID 1 (systemd). Jul 16 00:03:06.149971 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 16 00:03:06.150031 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:03:06.170035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 16 00:03:06.170086 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:03:06.192136 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 16 00:03:06.192200 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 16 00:03:06.192250 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 16 00:03:06.192719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 16 00:03:06.192828 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 16 00:03:06.245642 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 16 00:03:06.245791 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 16 00:03:06.248533 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 16 00:03:06.275187 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jul 16 00:03:06.330917 systemd[1]: Switching root. Jul 16 00:03:06.518883 systemd-journald[300]: Journal stopped Jul 16 00:03:08.208926 kernel: SELinux: policy capability network_peer_controls=1 Jul 16 00:03:08.208941 kernel: SELinux: policy capability open_perms=1 Jul 16 00:03:08.208949 kernel: SELinux: policy capability extended_socket_class=1 Jul 16 00:03:08.208955 kernel: SELinux: policy capability always_check_network=0 Jul 16 00:03:08.208960 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 16 00:03:08.208965 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 16 00:03:08.208971 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 16 00:03:08.208976 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 16 00:03:08.208981 kernel: SELinux: policy capability userspace_initial_context=0 Jul 16 00:03:08.208987 kernel: audit: type=1403 audit(1752624186.629:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 16 00:03:08.208994 systemd[1]: Successfully loaded SELinux policy in 84.633ms. Jul 16 00:03:08.209001 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.486ms. Jul 16 00:03:08.209007 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 16 00:03:08.209013 systemd[1]: Detected architecture x86-64. Jul 16 00:03:08.209021 systemd[1]: Detected first boot. Jul 16 00:03:08.209027 systemd[1]: Hostname set to . Jul 16 00:03:08.209033 systemd[1]: Initializing machine ID from random generator. Jul 16 00:03:08.209039 zram_generator::config[1368]: No configuration found. Jul 16 00:03:08.209046 systemd[1]: Populated /etc with preset unit settings. Jul 16 00:03:08.209052 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 16 00:03:08.209060 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 16 00:03:08.209066 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 16 00:03:08.209072 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 16 00:03:08.209079 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 16 00:03:08.209085 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 16 00:03:08.209091 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 16 00:03:08.209098 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 16 00:03:08.209105 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 16 00:03:08.209112 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 16 00:03:08.209118 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 16 00:03:08.209124 systemd[1]: Created slice user.slice - User and Session Slice. Jul 16 00:03:08.209130 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 16 00:03:08.209137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 16 00:03:08.209143 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jul 16 00:03:08.209149 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 16 00:03:08.209156 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 16 00:03:08.209163 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 16 00:03:08.209169 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Jul 16 00:03:08.209176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 16 00:03:08.209182 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 16 00:03:08.209190 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 16 00:03:08.209196 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 16 00:03:08.209203 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 16 00:03:08.209210 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 16 00:03:08.209217 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 16 00:03:08.209223 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 16 00:03:08.209229 systemd[1]: Reached target slices.target - Slice Units. Jul 16 00:03:08.209236 systemd[1]: Reached target swap.target - Swaps. Jul 16 00:03:08.209242 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 16 00:03:08.209248 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 16 00:03:08.209255 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 16 00:03:08.209262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 16 00:03:08.209269 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 16 00:03:08.209275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 16 00:03:08.209282 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 16 00:03:08.209288 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 16 00:03:08.209296 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 16 00:03:08.209302 systemd[1]: Mounting media.mount - External Media Directory... Jul 16 00:03:08.209309 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 00:03:08.209316 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 16 00:03:08.209322 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 16 00:03:08.209329 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 16 00:03:08.209336 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 16 00:03:08.209343 systemd[1]: Reached target machines.target - Containers. Jul 16 00:03:08.209350 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 16 00:03:08.209357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 16 00:03:08.209364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 16 00:03:08.209370 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
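boot.automount and proc-sys-fs-binfmt_misc.automount above are systemd automount points: the actual mount is deferred until the path is first accessed. An illustrative unit pair (not the real Flatcar units, which are generated; names and the device source are assumptions) looks like this:

    # boot.automount -- the trigger unit
    [Unit]
    Description=Boot partition Automount Point

    [Automount]
    Where=/boot
    TimeoutIdleSec=300

    # boot.mount -- performed on first access to /boot
    [Unit]
    Description=Boot partition

    [Mount]
    What=/dev/disk/by-label/EFI-SYSTEM    # illustrative source; the real one differs per image
    Where=/boot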
Jul 16 00:03:08.209377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 16 00:03:08.209383 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 16 00:03:08.209390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 16 00:03:08.209396 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 16 00:03:08.209402 kernel: ACPI: bus type drm_connector registered Jul 16 00:03:08.209409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 16 00:03:08.209416 kernel: fuse: init (API version 7.41) Jul 16 00:03:08.209422 kernel: loop: module loaded Jul 16 00:03:08.209428 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 16 00:03:08.209435 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 16 00:03:08.209441 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 16 00:03:08.209448 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 16 00:03:08.209454 systemd[1]: Stopped systemd-fsck-usr.service. Jul 16 00:03:08.209462 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 16 00:03:08.209469 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 16 00:03:08.209475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 16 00:03:08.209482 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 16 00:03:08.209498 systemd-journald[1471]: Collecting audit messages is disabled. Jul 16 00:03:08.209515 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 16 00:03:08.209523 systemd-journald[1471]: Journal started Jul 16 00:03:08.209536 systemd-journald[1471]: Runtime Journal (/run/log/journal/25a4754564bd461abd486cb55cf3c2a1) is 8M, max 640.1M, 632.1M free. Jul 16 00:03:07.060269 systemd[1]: Queued start job for default target multi-user.target. Jul 16 00:03:07.075822 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 16 00:03:07.076110 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 16 00:03:08.238888 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 16 00:03:08.262832 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 16 00:03:08.283029 systemd[1]: verity-setup.service: Deactivated successfully. Jul 16 00:03:08.283097 systemd[1]: Stopped verity-setup.service. Jul 16 00:03:08.307809 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 00:03:08.315798 systemd[1]: Started systemd-journald.service - Journal Service. Jul 16 00:03:08.324210 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 16 00:03:08.332936 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 16 00:03:08.342056 systemd[1]: Mounted media.mount - External Media Directory. Jul 16 00:03:08.351031 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
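The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop jobs above are all instances of one template unit that simply runs modprobe on the instance name. Roughly, mirroring the upstream systemd template (the installed copy may differ slightly):

    # modprobe@.service -- %I expands to the unescaped instance name, e.g. dm_mod
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %I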
Jul 16 00:03:08.360026 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 16 00:03:08.369016 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 16 00:03:08.378097 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 16 00:03:08.388119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 16 00:03:08.398120 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 16 00:03:08.398258 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 16 00:03:08.408216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 00:03:08.408385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 16 00:03:08.418239 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 00:03:08.418462 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 16 00:03:08.428468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 00:03:08.428837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 16 00:03:08.439699 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 16 00:03:08.440217 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 16 00:03:08.449704 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 00:03:08.450218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 16 00:03:08.459855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 16 00:03:08.469896 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 16 00:03:08.480789 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 16 00:03:08.491843 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 16 00:03:08.502845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 16 00:03:08.521862 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 16 00:03:08.532725 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 16 00:03:08.557283 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 16 00:03:08.565970 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 16 00:03:08.566009 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 16 00:03:08.577304 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 16 00:03:08.588914 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 16 00:03:08.599038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 16 00:03:08.614702 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 16 00:03:08.632080 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 16 00:03:08.641904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 00:03:08.642562 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 16 00:03:08.647666 systemd-journald[1471]: Time spent on flushing to /var/log/journal/25a4754564bd461abd486cb55cf3c2a1 is 12.412ms for 1384 entries. Jul 16 00:03:08.647666 systemd-journald[1471]: System Journal (/var/log/journal/25a4754564bd461abd486cb55cf3c2a1) is 8M, max 195.6M, 187.6M free. Jul 16 00:03:08.670772 systemd-journald[1471]: Received client request to flush runtime journal. Jul 16 00:03:08.658882 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 16 00:03:08.659580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 16 00:03:08.668574 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 16 00:03:08.679619 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 16 00:03:08.691823 kernel: loop0: detected capacity change from 0 to 146240 Jul 16 00:03:08.694886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 16 00:03:08.704855 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 16 00:03:08.718804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 16 00:03:08.722500 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 16 00:03:08.733043 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 16 00:03:08.743019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 16 00:03:08.753798 kernel: loop1: detected capacity change from 0 to 221472 Jul 16 00:03:08.757963 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 16 00:03:08.768219 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 16 00:03:08.778573 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 16 00:03:08.800043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 16 00:03:08.819779 kernel: loop2: detected capacity change from 0 to 8 Jul 16 00:03:08.822264 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 16 00:03:08.822800 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 16 00:03:08.824400 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jul 16 00:03:08.824410 systemd-tmpfiles[1522]: ACLs are not supported, ignoring. Jul 16 00:03:08.833018 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 16 00:03:08.859773 kernel: loop3: detected capacity change from 0 to 113872 Jul 16 00:03:08.915777 kernel: loop4: detected capacity change from 0 to 146240 Jul 16 00:03:08.938772 kernel: loop5: detected capacity change from 0 to 221472 Jul 16 00:03:08.962803 kernel: loop6: detected capacity change from 0 to 8 Jul 16 00:03:08.962847 kernel: loop7: detected capacity change from 0 to 113872 Jul 16 00:03:08.978598 (sd-merge)[1528]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Jul 16 00:03:08.978841 (sd-merge)[1528]: Merged extensions into '/usr'. Jul 16 00:03:08.982163 systemd[1]: Reload requested from client PID 1506 ('systemd-sysext') (unit systemd-sysext.service)... Jul 16 00:03:08.982171 systemd[1]: Reloading... Jul 16 00:03:08.989218 ldconfig[1501]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
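The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-packet' extension images onto /usr. After boot, the merge state can be checked with the systemd-sysext tool; a quick sketch:

    # Show which hierarchies are currently overlaid and by which extension images
    systemd-sysext status
    # List the extension images that were found
    systemd-sysext list
    # Re-run the merge after adding or removing an extension image
    systemd-sysext refresh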
Jul 16 00:03:09.006832 zram_generator::config[1554]: No configuration found. Jul 16 00:03:09.065519 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:03:09.125959 systemd[1]: Reloading finished in 143 ms. Jul 16 00:03:09.153719 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 16 00:03:09.163148 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 16 00:03:09.172980 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 16 00:03:09.193574 systemd[1]: Starting ensure-sysext.service... Jul 16 00:03:09.200583 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 16 00:03:09.211682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 16 00:03:09.218883 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 16 00:03:09.219089 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 16 00:03:09.219444 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 16 00:03:09.219589 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 16 00:03:09.220097 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 16 00:03:09.220247 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Jul 16 00:03:09.220278 systemd-tmpfiles[1613]: ACLs are not supported, ignoring. Jul 16 00:03:09.222067 systemd-tmpfiles[1613]: Detected autofs mount point /boot during canonicalization of boot. Jul 16 00:03:09.222071 systemd-tmpfiles[1613]: Skipping /boot Jul 16 00:03:09.227382 systemd-tmpfiles[1613]: Detected autofs mount point /boot during canonicalization of boot. Jul 16 00:03:09.227386 systemd-tmpfiles[1613]: Skipping /boot Jul 16 00:03:09.237209 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 16 00:03:09.250350 systemd-udevd[1614]: Using default interface naming scheme 'v255'. Jul 16 00:03:09.250923 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:03:09.266393 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 16 00:03:09.277839 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 16 00:03:09.292219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 16 00:03:09.300893 augenrules[1723]: No rules Jul 16 00:03:09.302956 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
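The docker.socket warning above is cosmetic: as the message says, systemd rewrites the legacy /var/run/docker.sock path to /run/docker.sock at load time. To silence it without editing the vendor unit, a drop-in can reset the listen list; a sketch (the drop-in file name is hypothetical):

    # /etc/systemd/system/docker.socket.d/10-run-path.conf  (hypothetical drop-in)
    [Socket]
    # An empty assignment clears the inherited ListenStream list before re-adding the socket
    ListenStream=
    ListenStream=/run/docker.sock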
Jul 16 00:03:09.314776 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Jul 16 00:03:09.314857 kernel: ACPI: button: Sleep Button [SLPB] Jul 16 00:03:09.328697 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 16 00:03:09.337780 kernel: IPMI message handler: version 39.2 Jul 16 00:03:09.337829 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Jul 16 00:03:09.338069 kernel: ACPI: button: Power Button [PWRF] Jul 16 00:03:09.340921 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 16 00:03:09.351798 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Jul 16 00:03:09.357816 kernel: mousedev: PS/2 mouse device common for all mice Jul 16 00:03:09.383467 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Jul 16 00:03:09.383708 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Jul 16 00:03:09.386550 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:03:09.386735 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 16 00:03:09.392778 kernel: ipmi device interface Jul 16 00:03:09.400379 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 16 00:03:09.412155 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 16 00:03:09.424569 systemd[1]: Reload requested from client PID 1612 ('systemctl') (unit ensure-sysext.service)... Jul 16 00:03:09.424579 systemd[1]: Reloading... Jul 16 00:03:09.456776 zram_generator::config[1783]: No configuration found. Jul 16 00:03:09.461776 kernel: iTCO_vendor_support: vendor-support=0 Jul 16 00:03:09.461821 kernel: MACsec IEEE 802.1AE Jul 16 00:03:09.461834 kernel: ipmi_si: IPMI System Interface driver Jul 16 00:03:09.461847 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Jul 16 00:03:09.461941 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Jul 16 00:03:09.461950 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Jul 16 00:03:09.461959 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Jul 16 00:03:09.462037 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Jul 16 00:03:09.462105 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Jul 16 00:03:09.462170 kernel: ipmi_si: Adding ACPI-specified kcs state machine Jul 16 00:03:09.462182 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Jul 16 00:03:09.535778 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Jul 16 00:03:09.566973 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Jul 16 00:03:09.567120 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Jul 16 00:03:09.567771 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Jul 16 00:03:09.599586 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 16 00:03:09.604724 kernel: intel_rapl_common: Found RAPL domain package Jul 16 00:03:09.604759 kernel: intel_rapl_common: Found RAPL domain core Jul 16 00:03:09.610076 kernel: intel_rapl_common: Found RAPL domain dram Jul 16 00:03:09.647803 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Jul 16 00:03:09.655771 kernel: ipmi_ssif: IPMI SSIF Interface driver Jul 16 00:03:09.683679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5200_MTFDDAK480TDN OEM. Jul 16 00:03:09.693972 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Jul 16 00:03:09.694248 systemd[1]: Reloading finished in 269 ms. Jul 16 00:03:09.726690 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 16 00:03:09.743918 systemd[1]: Finished ensure-sysext.service. Jul 16 00:03:09.767937 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jul 16 00:03:09.776853 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 00:03:09.777538 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:03:09.784925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 16 00:03:09.795188 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 16 00:03:09.801457 augenrules[1851]: /sbin/augenrules: No change Jul 16 00:03:09.804749 augenrules[1869]: No rules Jul 16 00:03:09.818012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 16 00:03:09.845065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 16 00:03:09.864980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 16 00:03:09.873917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 16 00:03:09.883129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 16 00:03:09.892799 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 16 00:03:09.893703 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 16 00:03:09.902707 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 16 00:03:09.903295 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 16 00:03:09.904105 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 16 00:03:09.920504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 16 00:03:09.927851 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 16 00:03:09.927875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 16 00:03:09.928337 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 16 00:03:09.946570 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:03:09.946739 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
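With the KCS interface initialized above, the BMC the driver identified (man_id 0x002a7c, prod_id 0x1b0f) is reachable in-band through /dev/ipmi0. If ipmitool happens to be available (it is not part of a stock Flatcar image), the usual first checks look like:

    # Basic BMC identity, sensor readings and LAN settings over /dev/ipmi0
    ipmitool mc info
    ipmitool sdr list
    ipmitool lan print 1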
Jul 16 00:03:09.946978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 16 00:03:09.947112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 16 00:03:09.947318 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 16 00:03:09.947449 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 16 00:03:09.947649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 16 00:03:09.947784 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 16 00:03:09.947986 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 16 00:03:09.948115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 16 00:03:09.948364 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 16 00:03:09.948591 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 16 00:03:09.951360 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 16 00:03:09.951403 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 16 00:03:09.971308 systemd-resolved[1710]: Positive Trust Anchors: Jul 16 00:03:09.971317 systemd-resolved[1710]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 16 00:03:09.971357 systemd-resolved[1710]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 16 00:03:09.974920 systemd-resolved[1710]: Using system hostname 'ci-4372.0.1-n-fdc39dabbd'. Jul 16 00:03:09.975693 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 16 00:03:09.985243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 16 00:03:10.002561 systemd-networkd[1878]: lo: Link UP Jul 16 00:03:10.002565 systemd-networkd[1878]: lo: Gained carrier Jul 16 00:03:10.005133 systemd-networkd[1878]: bond0: netdev ready Jul 16 00:03:10.006134 systemd-networkd[1878]: Enumeration completed Jul 16 00:03:10.010586 systemd-networkd[1878]: enp1s0f0np0: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. Jul 16 00:03:10.023948 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 16 00:03:10.034079 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 16 00:03:10.043009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 16 00:03:10.054316 systemd[1]: Reached target network.target - Network. Jul 16 00:03:10.061883 systemd[1]: Reached target sysinit.target - System Initialization. Jul 16 00:03:10.071002 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 16 00:03:10.080913 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 16 00:03:10.090916 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 16 00:03:10.101030 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 16 00:03:10.110985 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 16 00:03:10.111072 systemd[1]: Reached target paths.target - Path Units. Jul 16 00:03:10.119988 systemd[1]: Reached target time-set.target - System Time Set. Jul 16 00:03:10.129346 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 16 00:03:10.138222 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 16 00:03:10.147993 systemd[1]: Reached target timers.target - Timer Units. Jul 16 00:03:10.159085 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 16 00:03:10.170923 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 16 00:03:10.183397 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 16 00:03:10.198472 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 16 00:03:10.207890 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 16 00:03:10.221241 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 16 00:03:10.231451 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 16 00:03:10.242241 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 16 00:03:10.251521 systemd[1]: Reached target sockets.target - Socket Units. Jul 16 00:03:10.259871 systemd[1]: Reached target basic.target - Basic System. Jul 16 00:03:10.266978 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 16 00:03:10.267004 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 16 00:03:10.267899 systemd[1]: Starting containerd.service - containerd container runtime... Jul 16 00:03:10.286898 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 16 00:03:10.297795 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 16 00:03:10.306751 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 16 00:03:10.315842 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Jul 16 00:03:10.329795 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Jul 16 00:03:10.330795 systemd-networkd[1878]: enp1s0f1np1: Configuring with /etc/systemd/network/10-b8:59:9f:de:85:2d.network. Jul 16 00:03:10.339328 coreos-metadata[1914]: Jul 16 00:03:10.339 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:03:10.341071 coreos-metadata[1914]: Jul 16 00:03:10.341 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Jul 16 00:03:10.348143 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 16 00:03:10.358401 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 16 00:03:10.362214 jq[1919]: false Jul 16 00:03:10.366942 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jul 16 00:03:10.368482 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 16 00:03:10.377557 extend-filesystems[1921]: Found /dev/sda6 Jul 16 00:03:10.384885 extend-filesystems[1921]: Found /dev/sda9 Jul 16 00:03:10.384885 extend-filesystems[1921]: Checking size of /dev/sda9 Jul 16 00:03:10.411833 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Jul 16 00:03:10.379238 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 16 00:03:10.388943 oslogin_cache_refresh[1922]: Refreshing passwd entry cache Jul 16 00:03:10.412017 extend-filesystems[1921]: Resized partition /dev/sda9 Jul 16 00:03:10.385597 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 16 00:03:10.390078 oslogin_cache_refresh[1922]: Failure getting users, quitting Jul 16 00:03:10.427948 extend-filesystems[1933]: resize2fs 1.47.2 (1-Jan-2025) Jul 16 00:03:10.436726 google_oslogin_nss_cache[1922]: oslogin_cache_refresh[1922]: Refreshing passwd entry cache Jul 16 00:03:10.436726 google_oslogin_nss_cache[1922]: oslogin_cache_refresh[1922]: Failure getting users, quitting Jul 16 00:03:10.436726 google_oslogin_nss_cache[1922]: oslogin_cache_refresh[1922]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 16 00:03:10.436726 google_oslogin_nss_cache[1922]: oslogin_cache_refresh[1922]: Refreshing group entry cache Jul 16 00:03:10.436726 google_oslogin_nss_cache[1922]: oslogin_cache_refresh[1922]: Failure getting groups, quitting Jul 16 00:03:10.436726 google_oslogin_nss_cache[1922]: oslogin_cache_refresh[1922]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 16 00:03:10.412735 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 16 00:03:10.390100 oslogin_cache_refresh[1922]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 16 00:03:10.390118 oslogin_cache_refresh[1922]: Refreshing group entry cache Jul 16 00:03:10.390379 oslogin_cache_refresh[1922]: Failure getting groups, quitting Jul 16 00:03:10.390383 oslogin_cache_refresh[1922]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 16 00:03:10.437350 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 16 00:03:10.454354 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 16 00:03:10.463909 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Jul 16 00:03:10.472807 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Jul 16 00:03:10.483339 systemd-networkd[1878]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Jul 16 00:03:10.483769 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Jul 16 00:03:10.484452 systemd-networkd[1878]: enp1s0f0np0: Link UP Jul 16 00:03:10.484573 systemd-networkd[1878]: enp1s0f0np0: Gained carrier Jul 16 00:03:10.493533 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Jul 16 00:03:10.493544 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 16 00:03:10.493984 systemd[1]: Starting update-engine.service - Update Engine... Jul 16 00:03:10.499874 systemd-networkd[1878]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:59:9f:de:85:2c.network. 
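The extend-filesystems service above has found the enlarged sda9 partition and is growing the mounted ext4 root online with resize2fs 1.47.2; the completion is logged a little further below. Done by hand, the equivalent step is roughly:

    # Grow the ext4 filesystem on the already-enlarged partition while it is mounted at /
    resize2fs /dev/sda9
    # Confirm the new size
    df -h /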
Jul 16 00:03:10.500010 systemd-networkd[1878]: enp1s0f1np1: Link UP Jul 16 00:03:10.500130 systemd-networkd[1878]: enp1s0f1np1: Gained carrier Jul 16 00:03:10.504997 systemd-logind[1947]: Watching system buttons on /dev/input/event3 (Power Button) Jul 16 00:03:10.505008 systemd-logind[1947]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 16 00:03:10.505018 systemd-logind[1947]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Jul 16 00:03:10.505331 systemd-logind[1947]: New seat seat0. Jul 16 00:03:10.506053 sshd_keygen[1950]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 16 00:03:10.513987 systemd-networkd[1878]: bond0: Link UP Jul 16 00:03:10.514154 systemd-networkd[1878]: bond0: Gained carrier Jul 16 00:03:10.514290 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:10.514555 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:10.514685 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:10.514731 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:10.514962 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 16 00:03:10.521930 update_engine[1952]: I20250716 00:03:10.521894 1952 main.cc:92] Flatcar Update Engine starting Jul 16 00:03:10.526435 systemd[1]: Started systemd-logind.service - User Login Management. Jul 16 00:03:10.527738 jq[1953]: true Jul 16 00:03:10.535087 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 16 00:03:10.545353 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 16 00:03:10.554966 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 16 00:03:10.555074 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 16 00:03:10.555215 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 16 00:03:10.562884 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 16 00:03:10.572065 systemd[1]: motdgen.service: Deactivated successfully. Jul 16 00:03:10.572173 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 16 00:03:10.581334 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 16 00:03:10.581441 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 16 00:03:10.602983 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 25000 Mbps full duplex Jul 16 00:03:10.603010 kernel: bond0: active interface up! Jul 16 00:03:10.604023 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 16 00:03:10.618525 (ntainerd)[1964]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 16 00:03:10.619885 jq[1963]: true Jul 16 00:03:10.628648 tar[1962]: linux-amd64/helm Jul 16 00:03:10.632205 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Jul 16 00:03:10.632327 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Jul 16 00:03:10.638983 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 16 00:03:10.651387 systemd[1]: issuegen.service: Deactivated successfully. Jul 16 00:03:10.651503 systemd[1]: Finished issuegen.service - Generate /run/issue. 
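The bond bring-up above is driven by MAC-matched unit files (10-b8:59:9f:de:85:2c.network and its sibling) plus 05-bond0.network. Their real contents are generated by the platform, but structurally such a setup looks like the following sketch; file names, the 802.3ad mode (consistent with the LACP warning in the log) and the DHCP addressing are illustrative assumptions, not the host's actual configuration:

    # bond0.netdev -- defines the bond device itself
    [NetDev]
    Name=bond0
    Kind=bond

    [Bond]
    Mode=802.3ad

    # 10-b8:59:9f:de:85:2c.network -- enslave the NIC with this MAC into bond0
    [Match]
    MACAddress=b8:59:9f:de:85:2c

    [Network]
    Bond=bond0

    # 05-bond0.network -- configure the bond once it exists
    [Match]
    Name=bond0

    [Network]
    DHCP=yes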
Jul 16 00:03:10.660941 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 16 00:03:10.661171 dbus-daemon[1915]: [system] SELinux support is enabled Jul 16 00:03:10.663061 update_engine[1952]: I20250716 00:03:10.663031 1952 update_check_scheduler.cc:74] Next update check in 2m8s Jul 16 00:03:10.669847 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 16 00:03:10.679335 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 16 00:03:10.679348 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 16 00:03:10.680171 dbus-daemon[1915]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 16 00:03:10.690828 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 16 00:03:10.690841 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 16 00:03:10.700538 bash[1995]: Updated "/home/core/.ssh/authorized_keys" Jul 16 00:03:10.701085 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 16 00:03:10.718904 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 16 00:03:10.719809 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 25000 Mbps full duplex Jul 16 00:03:10.729649 systemd[1]: Started update-engine.service - Update Engine. Jul 16 00:03:10.738865 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 16 00:03:10.748550 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Jul 16 00:03:10.758924 systemd[1]: Reached target getty.target - Login Prompts. Jul 16 00:03:10.768837 systemd[1]: Starting sshkeys.service... Jul 16 00:03:10.783462 containerd[1964]: time="2025-07-16T00:03:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 16 00:03:10.784197 containerd[1964]: time="2025-07-16T00:03:10.784182178Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 16 00:03:10.788092 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 16 00:03:10.789166 containerd[1964]: time="2025-07-16T00:03:10.789142627Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.948µs" Jul 16 00:03:10.789166 containerd[1964]: time="2025-07-16T00:03:10.789163377Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 16 00:03:10.789223 containerd[1964]: time="2025-07-16T00:03:10.789173387Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 16 00:03:10.789256 containerd[1964]: time="2025-07-16T00:03:10.789246830Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 16 00:03:10.789285 containerd[1964]: time="2025-07-16T00:03:10.789257154Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 16 00:03:10.789285 containerd[1964]: time="2025-07-16T00:03:10.789273008Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789336 containerd[1964]: time="2025-07-16T00:03:10.789314248Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789336 containerd[1964]: time="2025-07-16T00:03:10.789321557Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789529 containerd[1964]: time="2025-07-16T00:03:10.789517945Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789529 containerd[1964]: time="2025-07-16T00:03:10.789526710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789582 containerd[1964]: time="2025-07-16T00:03:10.789532684Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789582 containerd[1964]: time="2025-07-16T00:03:10.789537061Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789638 containerd[1964]: time="2025-07-16T00:03:10.789591227Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789725 containerd[1964]: time="2025-07-16T00:03:10.789713834Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789752 containerd[1964]: time="2025-07-16T00:03:10.789741276Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 16 00:03:10.789783 containerd[1964]: time="2025-07-16T00:03:10.789752898Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 16 00:03:10.789783 containerd[1964]: time="2025-07-16T00:03:10.789775744Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 16 00:03:10.789952 containerd[1964]: 
time="2025-07-16T00:03:10.789941123Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 16 00:03:10.789989 containerd[1964]: time="2025-07-16T00:03:10.789974290Z" level=info msg="metadata content store policy set" policy=shared Jul 16 00:03:10.802320 containerd[1964]: time="2025-07-16T00:03:10.802306571Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 16 00:03:10.802348 containerd[1964]: time="2025-07-16T00:03:10.802332817Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 16 00:03:10.802365 containerd[1964]: time="2025-07-16T00:03:10.802346559Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 16 00:03:10.802365 containerd[1964]: time="2025-07-16T00:03:10.802356180Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 16 00:03:10.802390 containerd[1964]: time="2025-07-16T00:03:10.802366320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 16 00:03:10.802390 containerd[1964]: time="2025-07-16T00:03:10.802372476Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 16 00:03:10.802390 containerd[1964]: time="2025-07-16T00:03:10.802381753Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 16 00:03:10.802436 containerd[1964]: time="2025-07-16T00:03:10.802389370Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 16 00:03:10.802436 containerd[1964]: time="2025-07-16T00:03:10.802400867Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 16 00:03:10.802436 containerd[1964]: time="2025-07-16T00:03:10.802407864Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 16 00:03:10.802436 containerd[1964]: time="2025-07-16T00:03:10.802413753Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 16 00:03:10.802436 containerd[1964]: time="2025-07-16T00:03:10.802423904Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 16 00:03:10.802498 containerd[1964]: time="2025-07-16T00:03:10.802481897Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 16 00:03:10.802513 containerd[1964]: time="2025-07-16T00:03:10.802497142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 16 00:03:10.802513 containerd[1964]: time="2025-07-16T00:03:10.802510719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 16 00:03:10.802538 containerd[1964]: time="2025-07-16T00:03:10.802517796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 16 00:03:10.802538 containerd[1964]: time="2025-07-16T00:03:10.802525902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 16 00:03:10.802538 containerd[1964]: time="2025-07-16T00:03:10.802532775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 16 00:03:10.802578 containerd[1964]: 
time="2025-07-16T00:03:10.802538851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 16 00:03:10.802578 containerd[1964]: time="2025-07-16T00:03:10.802544428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 16 00:03:10.802578 containerd[1964]: time="2025-07-16T00:03:10.802552189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 16 00:03:10.802578 containerd[1964]: time="2025-07-16T00:03:10.802557834Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 16 00:03:10.802578 containerd[1964]: time="2025-07-16T00:03:10.802562960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 16 00:03:10.802645 containerd[1964]: time="2025-07-16T00:03:10.802605104Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 16 00:03:10.802645 containerd[1964]: time="2025-07-16T00:03:10.802613551Z" level=info msg="Start snapshots syncer" Jul 16 00:03:10.802645 containerd[1964]: time="2025-07-16T00:03:10.802624651Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 16 00:03:10.802800 containerd[1964]: time="2025-07-16T00:03:10.802778966Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 16 00:03:10.802860 containerd[1964]: time="2025-07-16T00:03:10.802813972Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 16 00:03:10.803221 containerd[1964]: time="2025-07-16T00:03:10.803210816Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 
Jul 16 00:03:10.803270 containerd[1964]: time="2025-07-16T00:03:10.803261975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 16 00:03:10.803288 containerd[1964]: time="2025-07-16T00:03:10.803274987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 16 00:03:10.803302 containerd[1964]: time="2025-07-16T00:03:10.803284644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 16 00:03:10.803316 containerd[1964]: time="2025-07-16T00:03:10.803307593Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 16 00:03:10.803330 containerd[1964]: time="2025-07-16T00:03:10.803318847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 16 00:03:10.803330 containerd[1964]: time="2025-07-16T00:03:10.803325649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 16 00:03:10.803360 containerd[1964]: time="2025-07-16T00:03:10.803332159Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 16 00:03:10.803360 containerd[1964]: time="2025-07-16T00:03:10.803345022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 16 00:03:10.803360 containerd[1964]: time="2025-07-16T00:03:10.803352552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 16 00:03:10.803360 containerd[1964]: time="2025-07-16T00:03:10.803358652Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 16 00:03:10.803688 containerd[1964]: time="2025-07-16T00:03:10.803678862Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:03:10.803706 containerd[1964]: time="2025-07-16T00:03:10.803694153Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 16 00:03:10.803706 containerd[1964]: time="2025-07-16T00:03:10.803699920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:03:10.803737 containerd[1964]: time="2025-07-16T00:03:10.803705446Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 16 00:03:10.803737 containerd[1964]: time="2025-07-16T00:03:10.803709877Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 16 00:03:10.803737 containerd[1964]: time="2025-07-16T00:03:10.803718102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 16 00:03:10.803737 containerd[1964]: time="2025-07-16T00:03:10.803723929Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 16 00:03:10.803737 containerd[1964]: time="2025-07-16T00:03:10.803733609Z" level=info msg="runtime interface created" Jul 16 00:03:10.803737 containerd[1964]: time="2025-07-16T00:03:10.803736593Z" level=info msg="created NRI interface" Jul 16 00:03:10.803823 containerd[1964]: time="2025-07-16T00:03:10.803740943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 
Jul 16 00:03:10.803823 containerd[1964]: time="2025-07-16T00:03:10.803746732Z" level=info msg="Connect containerd service" Jul 16 00:03:10.803823 containerd[1964]: time="2025-07-16T00:03:10.803760814Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 16 00:03:10.804113 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 16 00:03:10.805499 containerd[1964]: time="2025-07-16T00:03:10.805460849Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 16 00:03:10.814693 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 16 00:03:10.824372 locksmithd[2015]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 16 00:03:10.851600 coreos-metadata[2026]: Jul 16 00:03:10.851 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Jul 16 00:03:10.911140 tar[1962]: linux-amd64/LICENSE Jul 16 00:03:10.911218 tar[1962]: linux-amd64/README.md Jul 16 00:03:10.915307 containerd[1964]: time="2025-07-16T00:03:10.915283450Z" level=info msg="Start subscribing containerd event" Jul 16 00:03:10.915354 containerd[1964]: time="2025-07-16T00:03:10.915314569Z" level=info msg="Start recovering state" Jul 16 00:03:10.915381 containerd[1964]: time="2025-07-16T00:03:10.915376560Z" level=info msg="Start event monitor" Jul 16 00:03:10.915400 containerd[1964]: time="2025-07-16T00:03:10.915385399Z" level=info msg="Start cni network conf syncer for default" Jul 16 00:03:10.915400 containerd[1964]: time="2025-07-16T00:03:10.915389619Z" level=info msg="Start streaming server" Jul 16 00:03:10.915400 containerd[1964]: time="2025-07-16T00:03:10.915394259Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 16 00:03:10.915466 containerd[1964]: time="2025-07-16T00:03:10.915317641Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 16 00:03:10.915466 containerd[1964]: time="2025-07-16T00:03:10.915455755Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 16 00:03:10.915512 containerd[1964]: time="2025-07-16T00:03:10.915403793Z" level=info msg="runtime interface starting up..." Jul 16 00:03:10.915512 containerd[1964]: time="2025-07-16T00:03:10.915477065Z" level=info msg="starting plugins..." Jul 16 00:03:10.915512 containerd[1964]: time="2025-07-16T00:03:10.915487752Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 16 00:03:10.915587 containerd[1964]: time="2025-07-16T00:03:10.915560813Z" level=info msg="containerd successfully booted in 0.132311s" Jul 16 00:03:10.915600 systemd[1]: Started containerd.service - containerd container runtime. Jul 16 00:03:10.926336 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 16 00:03:11.158770 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Jul 16 00:03:11.189458 extend-filesystems[1933]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 16 00:03:11.189458 extend-filesystems[1933]: old_desc_blocks = 1, new_desc_blocks = 56 Jul 16 00:03:11.189458 extend-filesystems[1933]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. 
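The CNI error above is expected at this stage: containerd's CRI plugin starts before any network add-on has installed a configuration under /etc/cni/net.d, and it retries once one appears. Purely for illustration, a minimal conflist that would satisfy the check (e.g. /etc/cni/net.d/10-example.conflist, a hypothetical file; on this node the real config is expected to be dropped in later by the cluster's network add-on) looks like:

    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }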
Jul 16 00:03:11.226864 extend-filesystems[1921]: Resized filesystem in /dev/sda9 Jul 16 00:03:11.189894 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 16 00:03:11.190021 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 16 00:03:11.341188 coreos-metadata[1914]: Jul 16 00:03:11.341 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Jul 16 00:03:12.020864 systemd-networkd[1878]: bond0: Gained IPv6LL Jul 16 00:03:12.021234 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:12.212088 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:12.212235 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:12.213425 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 16 00:03:12.225328 systemd[1]: Reached target network-online.target - Network is Online. Jul 16 00:03:12.235011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:03:12.255149 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 16 00:03:12.274892 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 16 00:03:12.999452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:13.009258 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:03:13.283687 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Jul 16 00:03:13.283835 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Jul 16 00:03:13.464253 kubelet[2073]: E0716 00:03:13.464172 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:03:13.465308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:03:13.465388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:03:13.465577 systemd[1]: kubelet.service: Consumed 593ms CPU time, 268.3M memory peak. Jul 16 00:03:14.235269 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 16 00:03:14.244769 systemd[1]: Started sshd@0-147.75.203.227:22-147.75.109.163:47866.service - OpenSSH per-connection server daemon (147.75.109.163:47866). Jul 16 00:03:14.306559 sshd[2094]: Accepted publickey for core from 147.75.109.163 port 47866 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:14.307649 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:14.315048 systemd-logind[1947]: New session 1 of user core. Jul 16 00:03:14.315862 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 16 00:03:14.325750 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 16 00:03:14.352455 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 16 00:03:14.369417 systemd[1]: Starting user@500.service - User Manager for UID 500... 
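The kubelet exit above is likewise expected on a node that has not been initialized yet: /var/lib/kubelet/config.yaml is only written by kubeadm init or kubeadm join, so until one of those runs the unit fails and systemd keeps rescheduling it (the restart-counter lines that follow later). A quick way to confirm that this is the only problem, assuming a kubeadm-managed node:

    ls -l /var/lib/kubelet/config.yaml       # absent until kubeadm init/join writes it
    systemctl status kubelet --no-pager      # shows the restart counter climbing
    journalctl -u kubelet -n 20 --no-pager   # repeats the "no such file or directory" error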
Jul 16 00:03:14.405059 (systemd)[2098]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 16 00:03:14.412065 systemd-logind[1947]: New session c1 of user core. Jul 16 00:03:14.552778 systemd[2098]: Queued start job for default target default.target. Jul 16 00:03:14.568739 systemd[2098]: Created slice app.slice - User Application Slice. Jul 16 00:03:14.568802 systemd[2098]: Reached target paths.target - Paths. Jul 16 00:03:14.568833 systemd[2098]: Reached target timers.target - Timers. Jul 16 00:03:14.569585 systemd[2098]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 16 00:03:14.575172 systemd[2098]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 16 00:03:14.575209 systemd[2098]: Reached target sockets.target - Sockets. Jul 16 00:03:14.575237 systemd[2098]: Reached target basic.target - Basic System. Jul 16 00:03:14.575270 systemd[2098]: Reached target default.target - Main User Target. Jul 16 00:03:14.575293 systemd[2098]: Startup finished in 145ms. Jul 16 00:03:14.575324 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 16 00:03:14.584793 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 16 00:03:14.663042 systemd[1]: Started sshd@1-147.75.203.227:22-147.75.109.163:47874.service - OpenSSH per-connection server daemon (147.75.109.163:47874). Jul 16 00:03:14.713731 sshd[2109]: Accepted publickey for core from 147.75.109.163 port 47874 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:14.714401 sshd-session[2109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:14.717383 systemd-logind[1947]: New session 2 of user core. Jul 16 00:03:14.737149 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 16 00:03:14.799883 sshd[2111]: Connection closed by 147.75.109.163 port 47874 Jul 16 00:03:14.800017 sshd-session[2109]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:14.814095 systemd[1]: sshd@1-147.75.203.227:22-147.75.109.163:47874.service: Deactivated successfully. Jul 16 00:03:14.815008 systemd[1]: session-2.scope: Deactivated successfully. Jul 16 00:03:14.815494 systemd-logind[1947]: Session 2 logged out. Waiting for processes to exit. Jul 16 00:03:14.816846 systemd[1]: Started sshd@2-147.75.203.227:22-147.75.109.163:47882.service - OpenSSH per-connection server daemon (147.75.109.163:47882). Jul 16 00:03:14.827518 systemd-logind[1947]: Removed session 2. Jul 16 00:03:14.867903 sshd[2117]: Accepted publickey for core from 147.75.109.163 port 47882 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:14.868829 sshd-session[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:14.872029 systemd-logind[1947]: New session 3 of user core. Jul 16 00:03:14.887039 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 16 00:03:14.959090 sshd[2119]: Connection closed by 147.75.109.163 port 47882 Jul 16 00:03:14.959760 sshd-session[2117]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:14.966606 systemd[1]: sshd@2-147.75.203.227:22-147.75.109.163:47882.service: Deactivated successfully. Jul 16 00:03:14.970682 systemd[1]: session-3.scope: Deactivated successfully. Jul 16 00:03:14.975371 systemd-logind[1947]: Session 3 logged out. Waiting for processes to exit. Jul 16 00:03:14.978411 systemd-logind[1947]: Removed session 3. 
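Each SSH login above gets its own session-N.scope, and the first login also starts user@500.service, the per-user systemd instance whose 145 ms startup is logged. The same state can be inspected from the host if needed:

    loginctl list-sessions                        # the session scopes created above
    loginctl show-user core -p Linger             # whether user@500.service outlives the sessions
    systemctl status user@500.service --no-pager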
Jul 16 00:03:15.796105 login[2013]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 16 00:03:15.799143 systemd-logind[1947]: New session 4 of user core. Jul 16 00:03:15.799964 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 16 00:03:15.801775 login[2012]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 16 00:03:15.804464 systemd-logind[1947]: New session 5 of user core. Jul 16 00:03:15.805151 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 16 00:03:17.061020 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:17.146177 coreos-metadata[1914]: Jul 16 00:03:17.146 INFO Fetch successful Jul 16 00:03:17.248158 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 16 00:03:17.249935 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Jul 16 00:03:17.425330 coreos-metadata[2026]: Jul 16 00:03:17.425 INFO Fetch successful Jul 16 00:03:17.509384 unknown[2026]: wrote ssh authorized keys file for user: core Jul 16 00:03:17.539756 update-ssh-keys[2158]: Updated "/home/core/.ssh/authorized_keys" Jul 16 00:03:17.540162 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 16 00:03:17.540933 systemd[1]: Finished sshkeys.service. Jul 16 00:03:18.436688 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Jul 16 00:03:18.438024 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 16 00:03:18.438523 systemd[1]: Startup finished in 4.435s (kernel) + 26.326s (initrd) + 11.892s (userspace) = 42.653s. Jul 16 00:03:23.702658 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 16 00:03:23.706386 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:03:23.970807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:23.975343 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:03:24.013511 kubelet[2171]: E0716 00:03:24.013480 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:03:24.015972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:03:24.016069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:03:24.016273 systemd[1]: kubelet.service: Consumed 176ms CPU time, 115.1M memory peak. Jul 16 00:03:24.980436 systemd[1]: Started sshd@3-147.75.203.227:22-147.75.109.163:58306.service - OpenSSH per-connection server daemon (147.75.109.163:58306). Jul 16 00:03:25.020693 sshd[2189]: Accepted publickey for core from 147.75.109.163 port 58306 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:25.021329 sshd-session[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:25.024225 systemd-logind[1947]: New session 6 of user core. Jul 16 00:03:25.037024 systemd[1]: Started session-6.scope - Session 6 of User core. 
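The two metadata agents above fetch the Equinix Metal (Packet) metadata document and write the instance's SSH keys into /home/core/.ssh/authorized_keys. A rough manual equivalent, with the ssh_keys field name assumed from the Packet metadata schema rather than taken from this log:

    # Field name assumed; inspect the document first with: curl -s https://metadata.packet.net/metadata | jq .
    curl -fsSL https://metadata.packet.net/metadata | jq -r '.ssh_keys[]' >> /home/core/.ssh/authorized_keys
    chown core:core /home/core/.ssh/authorized_keys && chmod 600 /home/core/.ssh/authorized_keys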
Jul 16 00:03:25.089030 sshd[2191]: Connection closed by 147.75.109.163 port 58306 Jul 16 00:03:25.089231 sshd-session[2189]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:25.110136 systemd[1]: sshd@3-147.75.203.227:22-147.75.109.163:58306.service: Deactivated successfully. Jul 16 00:03:25.114046 systemd[1]: session-6.scope: Deactivated successfully. Jul 16 00:03:25.116314 systemd-logind[1947]: Session 6 logged out. Waiting for processes to exit. Jul 16 00:03:25.122426 systemd[1]: Started sshd@4-147.75.203.227:22-147.75.109.163:58312.service - OpenSSH per-connection server daemon (147.75.109.163:58312). Jul 16 00:03:25.124284 systemd-logind[1947]: Removed session 6. Jul 16 00:03:25.215783 sshd[2197]: Accepted publickey for core from 147.75.109.163 port 58312 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:25.216457 sshd-session[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:25.219292 systemd-logind[1947]: New session 7 of user core. Jul 16 00:03:25.230214 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 16 00:03:25.289240 sshd[2199]: Connection closed by 147.75.109.163 port 58312 Jul 16 00:03:25.289863 sshd-session[2197]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:25.315246 systemd[1]: sshd@4-147.75.203.227:22-147.75.109.163:58312.service: Deactivated successfully. Jul 16 00:03:25.319076 systemd[1]: session-7.scope: Deactivated successfully. Jul 16 00:03:25.321311 systemd-logind[1947]: Session 7 logged out. Waiting for processes to exit. Jul 16 00:03:25.327172 systemd[1]: Started sshd@5-147.75.203.227:22-147.75.109.163:58318.service - OpenSSH per-connection server daemon (147.75.109.163:58318). Jul 16 00:03:25.328978 systemd-logind[1947]: Removed session 7. Jul 16 00:03:25.410003 sshd[2205]: Accepted publickey for core from 147.75.109.163 port 58318 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:25.410587 sshd-session[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:25.413420 systemd-logind[1947]: New session 8 of user core. Jul 16 00:03:25.430440 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 16 00:03:25.490642 sshd[2208]: Connection closed by 147.75.109.163 port 58318 Jul 16 00:03:25.490807 sshd-session[2205]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:25.500827 systemd[1]: sshd@5-147.75.203.227:22-147.75.109.163:58318.service: Deactivated successfully. Jul 16 00:03:25.501591 systemd[1]: session-8.scope: Deactivated successfully. Jul 16 00:03:25.502101 systemd-logind[1947]: Session 8 logged out. Waiting for processes to exit. Jul 16 00:03:25.503261 systemd[1]: Started sshd@6-147.75.203.227:22-147.75.109.163:58320.service - OpenSSH per-connection server daemon (147.75.109.163:58320). Jul 16 00:03:25.503826 systemd-logind[1947]: Removed session 8. Jul 16 00:03:25.546708 sshd[2214]: Accepted publickey for core from 147.75.109.163 port 58320 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:25.547515 sshd-session[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:25.550895 systemd-logind[1947]: New session 9 of user core. Jul 16 00:03:25.563032 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 16 00:03:25.624147 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 16 00:03:25.624295 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:03:25.635459 sudo[2218]: pam_unix(sudo:session): session closed for user root Jul 16 00:03:25.636256 sshd[2217]: Connection closed by 147.75.109.163 port 58320 Jul 16 00:03:25.636430 sshd-session[2214]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:25.648205 systemd[1]: sshd@6-147.75.203.227:22-147.75.109.163:58320.service: Deactivated successfully. Jul 16 00:03:25.649152 systemd[1]: session-9.scope: Deactivated successfully. Jul 16 00:03:25.649719 systemd-logind[1947]: Session 9 logged out. Waiting for processes to exit. Jul 16 00:03:25.651183 systemd[1]: Started sshd@7-147.75.203.227:22-147.75.109.163:58334.service - OpenSSH per-connection server daemon (147.75.109.163:58334). Jul 16 00:03:25.651920 systemd-logind[1947]: Removed session 9. Jul 16 00:03:25.707344 sshd[2224]: Accepted publickey for core from 147.75.109.163 port 58334 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:25.708380 sshd-session[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:25.712470 systemd-logind[1947]: New session 10 of user core. Jul 16 00:03:25.727011 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 16 00:03:25.780198 sudo[2228]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 16 00:03:25.780348 sudo[2228]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:03:25.783071 sudo[2228]: pam_unix(sudo:session): session closed for user root Jul 16 00:03:25.785689 sudo[2227]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 16 00:03:25.785838 sudo[2227]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:03:25.791592 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 16 00:03:25.839127 augenrules[2250]: No rules Jul 16 00:03:25.839473 systemd[1]: audit-rules.service: Deactivated successfully. Jul 16 00:03:25.839588 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 16 00:03:25.840174 sudo[2227]: pam_unix(sudo:session): session closed for user root Jul 16 00:03:25.840915 sshd[2226]: Connection closed by 147.75.109.163 port 58334 Jul 16 00:03:25.841069 sshd-session[2224]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:25.854014 systemd[1]: sshd@7-147.75.203.227:22-147.75.109.163:58334.service: Deactivated successfully. Jul 16 00:03:25.854838 systemd[1]: session-10.scope: Deactivated successfully. Jul 16 00:03:25.855381 systemd-logind[1947]: Session 10 logged out. Waiting for processes to exit. Jul 16 00:03:25.856522 systemd[1]: Started sshd@8-147.75.203.227:22-147.75.109.163:58336.service - OpenSSH per-connection server daemon (147.75.109.163:58336). Jul 16 00:03:25.857209 systemd-logind[1947]: Removed session 10. Jul 16 00:03:25.888731 sshd[2259]: Accepted publickey for core from 147.75.109.163 port 58336 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o Jul 16 00:03:25.889344 sshd-session[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 16 00:03:25.892027 systemd-logind[1947]: New session 11 of user core. Jul 16 00:03:25.904041 systemd[1]: Started session-11.scope - Session 11 of User core. 
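The sudo sequence above removes the stock rule files, so the subsequent audit-rules restart loads nothing ("No rules"): augenrules compiles whatever remains in /etc/audit/rules.d/*.rules into /etc/audit/audit.rules and loads it. The resulting state can be checked with the standard audit tools:

    ls /etc/audit/rules.d/                  # empty after the rm above
    augenrules --check                      # compare rules.d against the compiled audit.rules
    auditctl -l                             # list loaded rules; prints "No rules" here
    systemctl restart audit-rules.service   # what the sudo systemctl call above did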
Jul 16 00:03:25.954466 sudo[2262]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 16 00:03:25.954622 sudo[2262]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 16 00:03:26.247491 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 16 00:03:26.258164 (dockerd)[2289]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 16 00:03:26.465603 dockerd[2289]: time="2025-07-16T00:03:26.465505844Z" level=info msg="Starting up" Jul 16 00:03:26.466523 dockerd[2289]: time="2025-07-16T00:03:26.466453598Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 16 00:03:26.493155 dockerd[2289]: time="2025-07-16T00:03:26.493107902Z" level=info msg="Loading containers: start." Jul 16 00:03:26.505841 kernel: Initializing XFRM netlink socket Jul 16 00:03:26.642986 systemd-timesyncd[1880]: Network configuration changed, trying to establish connection. Jul 16 00:03:26.663775 systemd-networkd[1878]: docker0: Link UP Jul 16 00:03:26.665068 dockerd[2289]: time="2025-07-16T00:03:26.665020969Z" level=info msg="Loading containers: done." Jul 16 00:03:26.671511 dockerd[2289]: time="2025-07-16T00:03:26.671467251Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 16 00:03:26.671511 dockerd[2289]: time="2025-07-16T00:03:26.671506166Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 16 00:03:26.671593 dockerd[2289]: time="2025-07-16T00:03:26.671558275Z" level=info msg="Initializing buildkit" Jul 16 00:03:26.681714 dockerd[2289]: time="2025-07-16T00:03:26.681673834Z" level=info msg="Completed buildkit initialization" Jul 16 00:03:26.685021 dockerd[2289]: time="2025-07-16T00:03:26.684965221Z" level=info msg="Daemon has completed initialization" Jul 16 00:03:26.685021 dockerd[2289]: time="2025-07-16T00:03:26.684990606Z" level=info msg="API listen on /run/docker.sock" Jul 16 00:03:26.685106 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 16 00:03:27.099105 systemd-timesyncd[1880]: Contacted time server [2607:9000:7000:23:216:3cff:fe25:38d7]:123 (2.flatcar.pool.ntp.org). Jul 16 00:03:27.099131 systemd-timesyncd[1880]: Initial clock synchronization to Wed 2025-07-16 00:03:26.856206 UTC. Jul 16 00:03:27.446297 containerd[1964]: time="2025-07-16T00:03:27.446056981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Jul 16 00:03:28.261947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986026066.mount: Deactivated successfully. 
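dockerd above comes up with the overlay2 storage driver and listens on /run/docker.sock; the native-diff warning only concerns image-build performance, not correctness. Quick sanity checks against the running daemon:

    systemctl is-active docker.service docker.socket
    docker info --format '{{.Driver}} {{.ServerVersion}}'   # overlay2 28.0.1 per the daemon log above
    docker version --format '{{.Server.Version}}'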
Jul 16 00:03:28.940159 containerd[1964]: time="2025-07-16T00:03:28.940132267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:28.940380 containerd[1964]: time="2025-07-16T00:03:28.940312923Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Jul 16 00:03:28.940651 containerd[1964]: time="2025-07-16T00:03:28.940639730Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:28.941896 containerd[1964]: time="2025-07-16T00:03:28.941883974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:28.942396 containerd[1964]: time="2025-07-16T00:03:28.942384337Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 1.496240459s" Jul 16 00:03:28.942414 containerd[1964]: time="2025-07-16T00:03:28.942403082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Jul 16 00:03:28.942723 containerd[1964]: time="2025-07-16T00:03:28.942712242Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Jul 16 00:03:29.934332 containerd[1964]: time="2025-07-16T00:03:29.934309598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:29.934527 containerd[1964]: time="2025-07-16T00:03:29.934513076Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Jul 16 00:03:29.934950 containerd[1964]: time="2025-07-16T00:03:29.934915327Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:29.936059 containerd[1964]: time="2025-07-16T00:03:29.936024492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:29.936648 containerd[1964]: time="2025-07-16T00:03:29.936604564Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 993.874246ms" Jul 16 00:03:29.936648 containerd[1964]: time="2025-07-16T00:03:29.936625537Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Jul 16 
00:03:29.936944 containerd[1964]: time="2025-07-16T00:03:29.936896520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Jul 16 00:03:30.795529 containerd[1964]: time="2025-07-16T00:03:30.795500066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:30.795746 containerd[1964]: time="2025-07-16T00:03:30.795703054Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Jul 16 00:03:30.796157 containerd[1964]: time="2025-07-16T00:03:30.796116515Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:30.797410 containerd[1964]: time="2025-07-16T00:03:30.797371157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:30.797901 containerd[1964]: time="2025-07-16T00:03:30.797852729Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 860.942159ms" Jul 16 00:03:30.797901 containerd[1964]: time="2025-07-16T00:03:30.797867757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Jul 16 00:03:30.798173 containerd[1964]: time="2025-07-16T00:03:30.798125885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Jul 16 00:03:31.506802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount230935213.mount: Deactivated successfully. 
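The PullImage lines above and below come from the CRI plugin, so the images land in containerd's k8s.io namespace rather than the default one. They can be listed or pre-pulled directly, assuming ctr and crictl are present on the host:

    ctr -n k8s.io images ls | grep registry.k8s.io
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    ctr -n k8s.io images pull registry.k8s.io/kube-proxy:v1.31.11   # same image the CRI pull fetches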
Jul 16 00:03:31.696994 containerd[1964]: time="2025-07-16T00:03:31.696941791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:31.697182 containerd[1964]: time="2025-07-16T00:03:31.697137128Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Jul 16 00:03:31.697482 containerd[1964]: time="2025-07-16T00:03:31.697442472Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:31.698271 containerd[1964]: time="2025-07-16T00:03:31.698256251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:31.698530 containerd[1964]: time="2025-07-16T00:03:31.698490640Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 900.346805ms" Jul 16 00:03:31.698530 containerd[1964]: time="2025-07-16T00:03:31.698507693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Jul 16 00:03:31.698795 containerd[1964]: time="2025-07-16T00:03:31.698786072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 16 00:03:32.168585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount354303297.mount: Deactivated successfully. 
Jul 16 00:03:32.753864 containerd[1964]: time="2025-07-16T00:03:32.753835366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:32.754125 containerd[1964]: time="2025-07-16T00:03:32.754024445Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 16 00:03:32.754436 containerd[1964]: time="2025-07-16T00:03:32.754423133Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:32.755795 containerd[1964]: time="2025-07-16T00:03:32.755781167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:32.756334 containerd[1964]: time="2025-07-16T00:03:32.756322486Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.057522007s" Jul 16 00:03:32.756356 containerd[1964]: time="2025-07-16T00:03:32.756338499Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 16 00:03:32.756596 containerd[1964]: time="2025-07-16T00:03:32.756587132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 16 00:03:33.204896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112059367.mount: Deactivated successfully. 
Jul 16 00:03:33.205978 containerd[1964]: time="2025-07-16T00:03:33.205926256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:03:33.206126 containerd[1964]: time="2025-07-16T00:03:33.206113902Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 16 00:03:33.206497 containerd[1964]: time="2025-07-16T00:03:33.206486236Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:03:33.207403 containerd[1964]: time="2025-07-16T00:03:33.207389533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 16 00:03:33.207803 containerd[1964]: time="2025-07-16T00:03:33.207789948Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 451.163374ms" Jul 16 00:03:33.207850 containerd[1964]: time="2025-07-16T00:03:33.207818826Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 16 00:03:33.208152 containerd[1964]: time="2025-07-16T00:03:33.208141461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 16 00:03:33.767736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548768453.mount: Deactivated successfully. Jul 16 00:03:34.201344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 16 00:03:34.203353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:03:34.567441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:34.569563 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 16 00:03:34.589298 kubelet[2706]: E0716 00:03:34.589255 2706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 16 00:03:34.590820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 16 00:03:34.590937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 16 00:03:34.591161 systemd[1]: kubelet.service: Consumed 121ms CPU time, 114.2M memory peak. 
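The "Referenced but unset environment variable" notes mean the kubelet unit's drop-in references KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS, but the files that normally define them do not exist yet; kubeadm writes /var/lib/kubelet/kubeadm-flags.env during init or join. A sketch of the usual kubeadm drop-in layout, with the file names and kubelet path assumed rather than taken from this host:

    # Typical kubeadm drop-in (10-kubeadm.conf); paths assumed, shown only to explain the variables above.
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    EOF
    systemctl daemon-reload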
Jul 16 00:03:34.996280 containerd[1964]: time="2025-07-16T00:03:34.996248992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:34.996541 containerd[1964]: time="2025-07-16T00:03:34.996422806Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 16 00:03:34.996880 containerd[1964]: time="2025-07-16T00:03:34.996866533Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:34.998311 containerd[1964]: time="2025-07-16T00:03:34.998174788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:34.999999 containerd[1964]: time="2025-07-16T00:03:34.999983030Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 1.791824741s" Jul 16 00:03:35.000039 containerd[1964]: time="2025-07-16T00:03:35.000000836Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 16 00:03:36.536493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:36.536602 systemd[1]: kubelet.service: Consumed 121ms CPU time, 114.2M memory peak. Jul 16 00:03:36.538025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:03:36.550355 systemd[1]: Reload requested from client PID 2774 ('systemctl') (unit session-11.scope)... Jul 16 00:03:36.550363 systemd[1]: Reloading... Jul 16 00:03:36.596849 zram_generator::config[2820]: No configuration found. Jul 16 00:03:36.652055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:03:36.739594 systemd[1]: Reloading finished in 189 ms. Jul 16 00:03:36.765123 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 16 00:03:36.765164 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 16 00:03:36.765288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:36.765311 systemd[1]: kubelet.service: Consumed 51ms CPU time, 92.7M memory peak. Jul 16 00:03:36.766550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:03:37.039348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:37.041370 (kubelet)[2884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:03:37.062625 kubelet[2884]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:03:37.062625 kubelet[2884]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 16 00:03:37.062625 kubelet[2884]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:03:37.062957 kubelet[2884]: I0716 00:03:37.062669 2884 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:03:37.313491 kubelet[2884]: I0716 00:03:37.313421 2884 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 00:03:37.313491 kubelet[2884]: I0716 00:03:37.313433 2884 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:03:37.313578 kubelet[2884]: I0716 00:03:37.313569 2884 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 00:03:37.335829 kubelet[2884]: E0716 00:03:37.335760 2884 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://147.75.203.227:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 147.75.203.227:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:03:37.338443 kubelet[2884]: I0716 00:03:37.338405 2884 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:03:37.344561 kubelet[2884]: I0716 00:03:37.344496 2884 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:03:37.353615 kubelet[2884]: I0716 00:03:37.353602 2884 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:03:37.354192 kubelet[2884]: I0716 00:03:37.354151 2884 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 00:03:37.354233 kubelet[2884]: I0716 00:03:37.354218 2884 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:03:37.354356 kubelet[2884]: I0716 00:03:37.354233 2884 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-fdc39dabbd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 16 00:03:37.354356 kubelet[2884]: I0716 00:03:37.354332 2884 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 00:03:37.354356 kubelet[2884]: I0716 00:03:37.354338 2884 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 00:03:37.354480 kubelet[2884]: I0716 00:03:37.354411 2884 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:03:37.356775 kubelet[2884]: I0716 00:03:37.356738 2884 kubelet.go:408] "Attempting to sync node with API server" Jul 16 00:03:37.356775 kubelet[2884]: I0716 00:03:37.356750 2884 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:03:37.356775 kubelet[2884]: I0716 00:03:37.356777 2884 kubelet.go:314] "Adding apiserver pod source" Jul 16 00:03:37.356857 kubelet[2884]: I0716 00:03:37.356810 2884 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:03:37.358997 kubelet[2884]: I0716 00:03:37.358968 2884 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:03:37.359437 kubelet[2884]: I0716 00:03:37.359399 2884 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:03:37.359839 kubelet[2884]: W0716 00:03:37.359753 2884 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://147.75.203.227:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 147.75.203.227:6443: connect: connection refused Jul 16 00:03:37.359872 kubelet[2884]: E0716 00:03:37.359845 2884 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://147.75.203.227:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 147.75.203.227:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:03:37.359978 kubelet[2884]: W0716 00:03:37.359956 2884 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://147.75.203.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-fdc39dabbd&limit=500&resourceVersion=0": dial tcp 147.75.203.227:6443: connect: connection refused Jul 16 00:03:37.360009 kubelet[2884]: E0716 00:03:37.359984 2884 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://147.75.203.227:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-n-fdc39dabbd&limit=500&resourceVersion=0\": dial tcp 147.75.203.227:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:03:37.360080 kubelet[2884]: W0716 00:03:37.360039 2884 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 16 00:03:37.361436 kubelet[2884]: I0716 00:03:37.361394 2884 server.go:1274] "Started kubelet" Jul 16 00:03:37.361485 kubelet[2884]: I0716 00:03:37.361459 2884 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:03:37.361563 kubelet[2884]: I0716 00:03:37.361457 2884 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:03:37.361722 kubelet[2884]: I0716 00:03:37.361712 2884 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:03:37.362215 kubelet[2884]: I0716 00:03:37.362207 2884 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:03:37.362255 kubelet[2884]: I0716 00:03:37.362214 2884 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:03:37.365081 kubelet[2884]: E0716 00:03:37.364992 2884 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-fdc39dabbd\" not found" Jul 16 00:03:37.365122 kubelet[2884]: I0716 00:03:37.365022 2884 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 00:03:37.365216 kubelet[2884]: I0716 00:03:37.365197 2884 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 00:03:37.365480 kubelet[2884]: I0716 00:03:37.365366 2884 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:03:37.365551 kubelet[2884]: E0716 00:03:37.365534 2884 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:03:37.365806 kubelet[2884]: E0716 00:03:37.365785 2884 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-fdc39dabbd?timeout=10s\": dial tcp 147.75.203.227:6443: connect: connection refused" interval="200ms" Jul 16 00:03:37.365852 kubelet[2884]: W0716 00:03:37.365802 2884 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://147.75.203.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 147.75.203.227:6443: connect: connection refused Jul 16 00:03:37.365881 kubelet[2884]: E0716 00:03:37.365856 2884 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://147.75.203.227:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 147.75.203.227:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:03:37.365913 kubelet[2884]: I0716 00:03:37.365896 2884 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:03:37.366052 kubelet[2884]: I0716 00:03:37.365984 2884 server.go:449] "Adding debug handlers to kubelet server" Jul 16 00:03:37.366052 kubelet[2884]: I0716 00:03:37.366016 2884 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:03:37.367985 kubelet[2884]: I0716 00:03:37.367975 2884 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:03:37.368473 kubelet[2884]: E0716 00:03:37.367626 2884 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://147.75.203.227:6443/api/v1/namespaces/default/events\": dial tcp 147.75.203.227:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-n-fdc39dabbd.185292709c927dc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-n-fdc39dabbd,UID:ci-4372.0.1-n-fdc39dabbd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-n-fdc39dabbd,},FirstTimestamp:2025-07-16 00:03:37.361382854 +0000 UTC m=+0.318114794,LastTimestamp:2025-07-16 00:03:37.361382854 +0000 UTC m=+0.318114794,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-n-fdc39dabbd,}" Jul 16 00:03:37.374523 kubelet[2884]: I0716 00:03:37.374513 2884 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 00:03:37.374523 kubelet[2884]: I0716 00:03:37.374519 2884 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 00:03:37.374598 kubelet[2884]: I0716 00:03:37.374528 2884 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:03:37.374598 kubelet[2884]: I0716 00:03:37.374560 2884 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:03:37.375108 kubelet[2884]: I0716 00:03:37.375098 2884 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 00:03:37.375220 kubelet[2884]: I0716 00:03:37.375111 2884 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 00:03:37.375220 kubelet[2884]: I0716 00:03:37.375120 2884 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 00:03:37.375220 kubelet[2884]: E0716 00:03:37.375139 2884 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:03:37.375384 kubelet[2884]: W0716 00:03:37.375368 2884 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://147.75.203.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 147.75.203.227:6443: connect: connection refused Jul 16 00:03:37.375417 kubelet[2884]: E0716 00:03:37.375395 2884 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://147.75.203.227:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 147.75.203.227:6443: connect: connection refused" logger="UnhandledError" Jul 16 00:03:37.375437 kubelet[2884]: I0716 00:03:37.375423 2884 policy_none.go:49] "None policy: Start" Jul 16 00:03:37.375642 kubelet[2884]: I0716 00:03:37.375634 2884 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 00:03:37.375664 kubelet[2884]: I0716 00:03:37.375645 2884 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:03:37.378087 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 16 00:03:37.392564 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 16 00:03:37.395072 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 16 00:03:37.404466 kubelet[2884]: I0716 00:03:37.404427 2884 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:03:37.404608 kubelet[2884]: I0716 00:03:37.404568 2884 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:03:37.404608 kubelet[2884]: I0716 00:03:37.404577 2884 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:03:37.404717 kubelet[2884]: I0716 00:03:37.404710 2884 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:03:37.405358 kubelet[2884]: E0716 00:03:37.405318 2884 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-n-fdc39dabbd\" not found" Jul 16 00:03:37.500752 systemd[1]: Created slice kubepods-burstable-podc5c1ec2c8105f1924b0dfdbd6048942d.slice - libcontainer container kubepods-burstable-podc5c1ec2c8105f1924b0dfdbd6048942d.slice. 
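The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the file given by --config; --pod-infra-container-image has no config-file counterpart because, as the same message notes, the sandbox image now comes from the CRI runtime. On an initialized kubeadm node that config file is the generated /var/lib/kubelet/config.yaml; the sketch below only illustrates the corresponding KubeletConfiguration fields (values taken from this log) and is written to a scratch path rather than the live file:

    cat <<'EOF' >/tmp/kubelet-config-example.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                             # "Using cgroup driver ... systemd" above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests          # "Adding static pod path" above
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF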
Jul 16 00:03:37.507635 kubelet[2884]: I0716 00:03:37.507524 2884 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.508300 kubelet[2884]: E0716 00:03:37.508212 2884 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.227:6443/api/v1/nodes\": dial tcp 147.75.203.227:6443: connect: connection refused" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.521472 systemd[1]: Created slice kubepods-burstable-poddf177b9bebe4b3f0b6bdad0b1b5a72dc.slice - libcontainer container kubepods-burstable-poddf177b9bebe4b3f0b6bdad0b1b5a72dc.slice. Jul 16 00:03:37.549087 systemd[1]: Created slice kubepods-burstable-pod8c53ca48d5f40ac36502962cdb1b38cd.slice - libcontainer container kubepods-burstable-pod8c53ca48d5f40ac36502962cdb1b38cd.slice. Jul 16 00:03:37.567338 kubelet[2884]: E0716 00:03:37.567136 2884 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://147.75.203.227:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-n-fdc39dabbd?timeout=10s\": dial tcp 147.75.203.227:6443: connect: connection refused" interval="400ms" Jul 16 00:03:37.666838 kubelet[2884]: I0716 00:03:37.666698 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.666838 kubelet[2884]: I0716 00:03:37.666841 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.667207 kubelet[2884]: I0716 00:03:37.666902 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5c1ec2c8105f1924b0dfdbd6048942d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-fdc39dabbd\" (UID: \"c5c1ec2c8105f1924b0dfdbd6048942d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.667207 kubelet[2884]: I0716 00:03:37.666998 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c53ca48d5f40ac36502962cdb1b38cd-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-n-fdc39dabbd\" (UID: \"8c53ca48d5f40ac36502962cdb1b38cd\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.667207 kubelet[2884]: I0716 00:03:37.667054 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5c1ec2c8105f1924b0dfdbd6048942d-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-fdc39dabbd\" (UID: \"c5c1ec2c8105f1924b0dfdbd6048942d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.667207 kubelet[2884]: I0716 00:03:37.667104 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c5c1ec2c8105f1924b0dfdbd6048942d-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-fdc39dabbd\" (UID: \"c5c1ec2c8105f1924b0dfdbd6048942d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.667207 kubelet[2884]: I0716 00:03:37.667147 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.667635 kubelet[2884]: I0716 00:03:37.667196 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.667635 kubelet[2884]: I0716 00:03:37.667294 2884 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.712803 kubelet[2884]: I0716 00:03:37.712683 2884 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.713556 kubelet[2884]: E0716 00:03:37.713449 2884 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://147.75.203.227:6443/api/v1/nodes\": dial tcp 147.75.203.227:6443: connect: connection refused" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:37.818160 containerd[1964]: time="2025-07-16T00:03:37.817921267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-fdc39dabbd,Uid:c5c1ec2c8105f1924b0dfdbd6048942d,Namespace:kube-system,Attempt:0,}" Jul 16 00:03:37.827603 containerd[1964]: time="2025-07-16T00:03:37.827564522Z" level=info msg="connecting to shim f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9" address="unix:///run/containerd/s/1839bb1d627749a239117c926f4390f4b01d9733e77e5eb8e4ef8cfca46d2694" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:03:37.843736 containerd[1964]: time="2025-07-16T00:03:37.843610600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-fdc39dabbd,Uid:df177b9bebe4b3f0b6bdad0b1b5a72dc,Namespace:kube-system,Attempt:0,}" Jul 16 00:03:37.847913 systemd[1]: Started cri-containerd-f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9.scope - libcontainer container f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9. 
Jul 16 00:03:37.852336 containerd[1964]: time="2025-07-16T00:03:37.852304736Z" level=info msg="connecting to shim 485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07" address="unix:///run/containerd/s/8491aced459c0bec1bef27c3e4f6ea25aac445aa45f602829c64ec2805b3edcf" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:03:37.855035 containerd[1964]: time="2025-07-16T00:03:37.855012780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-fdc39dabbd,Uid:8c53ca48d5f40ac36502962cdb1b38cd,Namespace:kube-system,Attempt:0,}" Jul 16 00:03:37.862969 containerd[1964]: time="2025-07-16T00:03:37.862939222Z" level=info msg="connecting to shim 9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007" address="unix:///run/containerd/s/1ea61cedbd625ef736dec78c24c5739e679ce98f15a5a641c6ea82c96213cca3" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:03:37.863532 systemd[1]: Started cri-containerd-485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07.scope - libcontainer container 485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07. Jul 16 00:03:37.870970 systemd[1]: Started cri-containerd-9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007.scope - libcontainer container 9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007. Jul 16 00:03:37.874405 containerd[1964]: time="2025-07-16T00:03:37.874382397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-n-fdc39dabbd,Uid:c5c1ec2c8105f1924b0dfdbd6048942d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9\"" Jul 16 00:03:37.876099 containerd[1964]: time="2025-07-16T00:03:37.876082552Z" level=info msg="CreateContainer within sandbox \"f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 16 00:03:37.879002 containerd[1964]: time="2025-07-16T00:03:37.878990099Z" level=info msg="Container 3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:03:37.881581 containerd[1964]: time="2025-07-16T00:03:37.881570042Z" level=info msg="CreateContainer within sandbox \"f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e\"" Jul 16 00:03:37.881951 containerd[1964]: time="2025-07-16T00:03:37.881925187Z" level=info msg="StartContainer for \"3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e\"" Jul 16 00:03:37.882770 containerd[1964]: time="2025-07-16T00:03:37.882731045Z" level=info msg="connecting to shim 3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e" address="unix:///run/containerd/s/1839bb1d627749a239117c926f4390f4b01d9733e77e5eb8e4ef8cfca46d2694" protocol=ttrpc version=3 Jul 16 00:03:37.908175 systemd[1]: Started cri-containerd-3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e.scope - libcontainer container 3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e. 
Jul 16 00:03:37.909493 containerd[1964]: time="2025-07-16T00:03:37.909473786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-n-fdc39dabbd,Uid:8c53ca48d5f40ac36502962cdb1b38cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007\"" Jul 16 00:03:37.909664 containerd[1964]: time="2025-07-16T00:03:37.909649139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-n-fdc39dabbd,Uid:df177b9bebe4b3f0b6bdad0b1b5a72dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07\"" Jul 16 00:03:37.911433 containerd[1964]: time="2025-07-16T00:03:37.911417493Z" level=info msg="CreateContainer within sandbox \"485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 16 00:03:37.911473 containerd[1964]: time="2025-07-16T00:03:37.911448146Z" level=info msg="CreateContainer within sandbox \"9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 16 00:03:37.914935 containerd[1964]: time="2025-07-16T00:03:37.914891604Z" level=info msg="Container 574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:03:37.915435 containerd[1964]: time="2025-07-16T00:03:37.915386123Z" level=info msg="Container 3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:03:37.918232 containerd[1964]: time="2025-07-16T00:03:37.918187880Z" level=info msg="CreateContainer within sandbox \"9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2\"" Jul 16 00:03:37.918455 containerd[1964]: time="2025-07-16T00:03:37.918416422Z" level=info msg="StartContainer for \"574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2\"" Jul 16 00:03:37.918955 containerd[1964]: time="2025-07-16T00:03:37.918915436Z" level=info msg="connecting to shim 574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2" address="unix:///run/containerd/s/1ea61cedbd625ef736dec78c24c5739e679ce98f15a5a641c6ea82c96213cca3" protocol=ttrpc version=3 Jul 16 00:03:37.919210 containerd[1964]: time="2025-07-16T00:03:37.919169359Z" level=info msg="CreateContainer within sandbox \"485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e\"" Jul 16 00:03:37.919370 containerd[1964]: time="2025-07-16T00:03:37.919330407Z" level=info msg="StartContainer for \"3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e\"" Jul 16 00:03:37.919817 containerd[1964]: time="2025-07-16T00:03:37.919805858Z" level=info msg="connecting to shim 3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e" address="unix:///run/containerd/s/8491aced459c0bec1bef27c3e4f6ea25aac445aa45f602829c64ec2805b3edcf" protocol=ttrpc version=3 Jul 16 00:03:37.934865 systemd[1]: Started cri-containerd-3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e.scope - libcontainer container 3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e. 
Jul 16 00:03:37.935559 systemd[1]: Started cri-containerd-574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2.scope - libcontainer container 574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2. Jul 16 00:03:37.938048 containerd[1964]: time="2025-07-16T00:03:37.938025657Z" level=info msg="StartContainer for \"3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e\" returns successfully" Jul 16 00:03:37.964407 containerd[1964]: time="2025-07-16T00:03:37.964385319Z" level=info msg="StartContainer for \"3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e\" returns successfully" Jul 16 00:03:37.964505 containerd[1964]: time="2025-07-16T00:03:37.964461258Z" level=info msg="StartContainer for \"574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2\" returns successfully" Jul 16 00:03:38.114924 kubelet[2884]: I0716 00:03:38.114847 2884 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:38.740778 kubelet[2884]: E0716 00:03:38.740046 2884 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-n-fdc39dabbd\" not found" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:38.844963 kubelet[2884]: I0716 00:03:38.844880 2884 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:38.845181 kubelet[2884]: E0716 00:03:38.844988 2884 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4372.0.1-n-fdc39dabbd\": node \"ci-4372.0.1-n-fdc39dabbd\" not found" Jul 16 00:03:39.358832 kubelet[2884]: I0716 00:03:39.358744 2884 apiserver.go:52] "Watching apiserver" Jul 16 00:03:39.365967 kubelet[2884]: I0716 00:03:39.365918 2884 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 00:03:39.392286 kubelet[2884]: E0716 00:03:39.392205 2884 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372.0.1-n-fdc39dabbd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:39.392903 kubelet[2884]: E0716 00:03:39.392811 2884 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:39.393152 kubelet[2884]: E0716 00:03:39.392954 2884 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372.0.1-n-fdc39dabbd\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:40.984196 systemd[1]: Reload requested from client PID 3201 ('systemctl') (unit session-11.scope)... Jul 16 00:03:40.984203 systemd[1]: Reloading... Jul 16 00:03:41.027841 zram_generator::config[3246]: No configuration found. Jul 16 00:03:41.091438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 16 00:03:41.188524 systemd[1]: Reloading finished in 204 ms. Jul 16 00:03:41.203514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:03:41.210858 systemd[1]: kubelet.service: Deactivated successfully. 
Jul 16 00:03:41.210971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:41.211017 systemd[1]: kubelet.service: Consumed 847ms CPU time, 140.7M memory peak. Jul 16 00:03:41.212280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 16 00:03:41.471403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 16 00:03:41.473638 (kubelet)[3310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 16 00:03:41.492482 kubelet[3310]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:03:41.492482 kubelet[3310]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 16 00:03:41.492482 kubelet[3310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 16 00:03:41.492716 kubelet[3310]: I0716 00:03:41.492514 3310 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 16 00:03:41.496063 kubelet[3310]: I0716 00:03:41.496040 3310 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 16 00:03:41.496063 kubelet[3310]: I0716 00:03:41.496057 3310 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 16 00:03:41.496267 kubelet[3310]: I0716 00:03:41.496259 3310 server.go:934] "Client rotation is on, will bootstrap in background" Jul 16 00:03:41.497228 kubelet[3310]: I0716 00:03:41.497221 3310 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 16 00:03:41.498328 kubelet[3310]: I0716 00:03:41.498317 3310 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 16 00:03:41.499929 kubelet[3310]: I0716 00:03:41.499919 3310 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 16 00:03:41.506987 kubelet[3310]: I0716 00:03:41.506949 3310 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 16 00:03:41.507026 kubelet[3310]: I0716 00:03:41.507001 3310 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 16 00:03:41.507110 kubelet[3310]: I0716 00:03:41.507067 3310 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 16 00:03:41.507206 kubelet[3310]: I0716 00:03:41.507082 3310 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-n-fdc39dabbd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 16 00:03:41.507206 kubelet[3310]: I0716 00:03:41.507182 3310 topology_manager.go:138] "Creating topology manager with none policy" Jul 16 00:03:41.507206 kubelet[3310]: I0716 00:03:41.507188 3310 container_manager_linux.go:300] "Creating device plugin manager" Jul 16 00:03:41.507206 kubelet[3310]: I0716 00:03:41.507203 3310 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:03:41.507311 kubelet[3310]: I0716 00:03:41.507253 3310 kubelet.go:408] "Attempting to sync node with API server" Jul 16 00:03:41.507311 kubelet[3310]: I0716 00:03:41.507259 3310 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 16 00:03:41.507311 kubelet[3310]: I0716 00:03:41.507272 3310 kubelet.go:314] "Adding apiserver pod source" Jul 16 00:03:41.507311 kubelet[3310]: I0716 00:03:41.507278 3310 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 16 00:03:41.507565 kubelet[3310]: I0716 00:03:41.507553 3310 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 16 00:03:41.507820 kubelet[3310]: I0716 00:03:41.507813 3310 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 16 00:03:41.508033 kubelet[3310]: I0716 00:03:41.508025 3310 server.go:1274] "Started kubelet" Jul 16 00:03:41.508093 kubelet[3310]: I0716 00:03:41.508065 3310 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 16 00:03:41.508144 
kubelet[3310]: I0716 00:03:41.508079 3310 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 16 00:03:41.508290 kubelet[3310]: I0716 00:03:41.508280 3310 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 16 00:03:41.509245 kubelet[3310]: I0716 00:03:41.509231 3310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 16 00:03:41.509710 kubelet[3310]: I0716 00:03:41.509626 3310 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 16 00:03:41.509785 kubelet[3310]: I0716 00:03:41.509757 3310 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 16 00:03:41.509996 kubelet[3310]: I0716 00:03:41.509979 3310 server.go:449] "Adding debug handlers to kubelet server" Jul 16 00:03:41.510301 kubelet[3310]: E0716 00:03:41.509788 3310 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372.0.1-n-fdc39dabbd\" not found" Jul 16 00:03:41.510301 kubelet[3310]: I0716 00:03:41.510010 3310 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 16 00:03:41.510371 kubelet[3310]: I0716 00:03:41.510178 3310 reconciler.go:26] "Reconciler: start to sync state" Jul 16 00:03:41.510471 kubelet[3310]: I0716 00:03:41.510460 3310 factory.go:221] Registration of the systemd container factory successfully Jul 16 00:03:41.510624 kubelet[3310]: I0716 00:03:41.510557 3310 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 16 00:03:41.510624 kubelet[3310]: E0716 00:03:41.510572 3310 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 16 00:03:41.511404 kubelet[3310]: I0716 00:03:41.511394 3310 factory.go:221] Registration of the containerd container factory successfully Jul 16 00:03:41.515290 kubelet[3310]: I0716 00:03:41.515252 3310 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 16 00:03:41.516302 kubelet[3310]: I0716 00:03:41.516292 3310 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 16 00:03:41.516302 kubelet[3310]: I0716 00:03:41.516303 3310 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 16 00:03:41.516383 kubelet[3310]: I0716 00:03:41.516315 3310 kubelet.go:2321] "Starting kubelet main sync loop" Jul 16 00:03:41.516383 kubelet[3310]: E0716 00:03:41.516344 3310 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 16 00:03:41.526637 kubelet[3310]: I0716 00:03:41.526621 3310 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 16 00:03:41.526637 kubelet[3310]: I0716 00:03:41.526631 3310 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 16 00:03:41.526637 kubelet[3310]: I0716 00:03:41.526641 3310 state_mem.go:36] "Initialized new in-memory state store" Jul 16 00:03:41.526748 kubelet[3310]: I0716 00:03:41.526728 3310 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 16 00:03:41.526748 kubelet[3310]: I0716 00:03:41.526735 3310 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 16 00:03:41.526748 kubelet[3310]: I0716 00:03:41.526746 3310 policy_none.go:49] "None policy: Start" Jul 16 00:03:41.527054 kubelet[3310]: I0716 00:03:41.527017 3310 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 16 00:03:41.527054 kubelet[3310]: I0716 00:03:41.527028 3310 state_mem.go:35] "Initializing new in-memory state store" Jul 16 00:03:41.527104 kubelet[3310]: I0716 00:03:41.527098 3310 state_mem.go:75] "Updated machine memory state" Jul 16 00:03:41.529326 kubelet[3310]: I0716 00:03:41.529288 3310 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 16 00:03:41.529412 kubelet[3310]: I0716 00:03:41.529375 3310 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 16 00:03:41.529412 kubelet[3310]: I0716 00:03:41.529382 3310 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 16 00:03:41.529512 kubelet[3310]: I0716 00:03:41.529471 3310 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 16 00:03:41.625049 kubelet[3310]: W0716 00:03:41.624971 3310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:03:41.625761 kubelet[3310]: W0716 00:03:41.625703 3310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:03:41.625969 kubelet[3310]: W0716 00:03:41.625853 3310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:03:41.636023 kubelet[3310]: I0716 00:03:41.635936 3310 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.659585 kubelet[3310]: I0716 00:03:41.659518 3310 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.659859 kubelet[3310]: I0716 00:03:41.659690 3310 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.811991 kubelet[3310]: I0716 00:03:41.811746 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.811991 kubelet[3310]: I0716 00:03:41.811917 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.812424 kubelet[3310]: I0716 00:03:41.812023 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.812424 kubelet[3310]: I0716 00:03:41.812120 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5c1ec2c8105f1924b0dfdbd6048942d-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-n-fdc39dabbd\" (UID: \"c5c1ec2c8105f1924b0dfdbd6048942d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.812424 kubelet[3310]: I0716 00:03:41.812196 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5c1ec2c8105f1924b0dfdbd6048942d-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-n-fdc39dabbd\" (UID: \"c5c1ec2c8105f1924b0dfdbd6048942d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.812424 kubelet[3310]: I0716 00:03:41.812263 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5c1ec2c8105f1924b0dfdbd6048942d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-n-fdc39dabbd\" (UID: \"c5c1ec2c8105f1924b0dfdbd6048942d\") " pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.812424 kubelet[3310]: I0716 00:03:41.812340 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.813265 kubelet[3310]: I0716 00:03:41.812405 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df177b9bebe4b3f0b6bdad0b1b5a72dc-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-n-fdc39dabbd\" (UID: \"df177b9bebe4b3f0b6bdad0b1b5a72dc\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:41.813265 kubelet[3310]: I0716 00:03:41.812470 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c53ca48d5f40ac36502962cdb1b38cd-kubeconfig\") pod 
\"kube-scheduler-ci-4372.0.1-n-fdc39dabbd\" (UID: \"8c53ca48d5f40ac36502962cdb1b38cd\") " pod="kube-system/kube-scheduler-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:42.508514 kubelet[3310]: I0716 00:03:42.508431 3310 apiserver.go:52] "Watching apiserver" Jul 16 00:03:42.532358 kubelet[3310]: W0716 00:03:42.532290 3310 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 16 00:03:42.532617 kubelet[3310]: E0716 00:03:42.532442 3310 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372.0.1-n-fdc39dabbd\" already exists" pod="kube-system/kube-scheduler-ci-4372.0.1-n-fdc39dabbd" Jul 16 00:03:42.551246 kubelet[3310]: I0716 00:03:42.551177 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-n-fdc39dabbd" podStartSLOduration=1.5511273540000001 podStartE2EDuration="1.551127354s" podCreationTimestamp="2025-07-16 00:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:03:42.551091893 +0000 UTC m=+1.075577595" watchObservedRunningTime="2025-07-16 00:03:42.551127354 +0000 UTC m=+1.075613055" Jul 16 00:03:42.560925 kubelet[3310]: I0716 00:03:42.560893 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-n-fdc39dabbd" podStartSLOduration=1.5608778110000001 podStartE2EDuration="1.560877811s" podCreationTimestamp="2025-07-16 00:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:03:42.556806828 +0000 UTC m=+1.081292525" watchObservedRunningTime="2025-07-16 00:03:42.560877811 +0000 UTC m=+1.085363500" Jul 16 00:03:42.565331 kubelet[3310]: I0716 00:03:42.565308 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-n-fdc39dabbd" podStartSLOduration=1.565298953 podStartE2EDuration="1.565298953s" podCreationTimestamp="2025-07-16 00:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:03:42.560944304 +0000 UTC m=+1.085429996" watchObservedRunningTime="2025-07-16 00:03:42.565298953 +0000 UTC m=+1.089784644" Jul 16 00:03:42.611208 kubelet[3310]: I0716 00:03:42.611163 3310 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 16 00:03:46.497171 kubelet[3310]: I0716 00:03:46.497094 3310 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 16 00:03:46.498116 containerd[1964]: time="2025-07-16T00:03:46.497899977Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 16 00:03:46.498846 kubelet[3310]: I0716 00:03:46.498359 3310 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 16 00:03:46.994956 systemd[1]: Created slice kubepods-besteffort-pod61a8de6e_115a_4f47_ae77_4131e8ee8669.slice - libcontainer container kubepods-besteffort-pod61a8de6e_115a_4f47_ae77_4131e8ee8669.slice. 
Jul 16 00:03:47.045196 kubelet[3310]: I0716 00:03:47.045079 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/61a8de6e-115a-4f47-ae77-4131e8ee8669-kube-proxy\") pod \"kube-proxy-rdc6q\" (UID: \"61a8de6e-115a-4f47-ae77-4131e8ee8669\") " pod="kube-system/kube-proxy-rdc6q" Jul 16 00:03:47.045196 kubelet[3310]: I0716 00:03:47.045182 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61a8de6e-115a-4f47-ae77-4131e8ee8669-xtables-lock\") pod \"kube-proxy-rdc6q\" (UID: \"61a8de6e-115a-4f47-ae77-4131e8ee8669\") " pod="kube-system/kube-proxy-rdc6q" Jul 16 00:03:47.045546 kubelet[3310]: I0716 00:03:47.045240 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvcnr\" (UniqueName: \"kubernetes.io/projected/61a8de6e-115a-4f47-ae77-4131e8ee8669-kube-api-access-hvcnr\") pod \"kube-proxy-rdc6q\" (UID: \"61a8de6e-115a-4f47-ae77-4131e8ee8669\") " pod="kube-system/kube-proxy-rdc6q" Jul 16 00:03:47.045546 kubelet[3310]: I0716 00:03:47.045346 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61a8de6e-115a-4f47-ae77-4131e8ee8669-lib-modules\") pod \"kube-proxy-rdc6q\" (UID: \"61a8de6e-115a-4f47-ae77-4131e8ee8669\") " pod="kube-system/kube-proxy-rdc6q" Jul 16 00:03:47.315054 containerd[1964]: time="2025-07-16T00:03:47.314811370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdc6q,Uid:61a8de6e-115a-4f47-ae77-4131e8ee8669,Namespace:kube-system,Attempt:0,}" Jul 16 00:03:47.323888 containerd[1964]: time="2025-07-16T00:03:47.323842418Z" level=info msg="connecting to shim 417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88" address="unix:///run/containerd/s/f5188db11b375714f45f496c6dee924a8142b495de84130d763c54dca2299fa4" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:03:47.340080 systemd[1]: Started cri-containerd-417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88.scope - libcontainer container 417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88. Jul 16 00:03:47.350802 containerd[1964]: time="2025-07-16T00:03:47.350780211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdc6q,Uid:61a8de6e-115a-4f47-ae77-4131e8ee8669,Namespace:kube-system,Attempt:0,} returns sandbox id \"417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88\"" Jul 16 00:03:47.352096 containerd[1964]: time="2025-07-16T00:03:47.352082797Z" level=info msg="CreateContainer within sandbox \"417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 16 00:03:47.355914 containerd[1964]: time="2025-07-16T00:03:47.355871491Z" level=info msg="Container fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:03:47.358154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214175830.mount: Deactivated successfully. 
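The sandbox and container IDs recorded above are created through containerd's CRI plugin and live in its "k8s.io" namespace (visible in the namespace=k8s.io fields of the shim connections). A minimal Go sketch of listing those containers with the upstream containerd client, assuming the default management socket at /run/containerd/containerd.sock — this log only shows the per-shim sockets under /run/containerd/s/, so the path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumption: containerd serves its management API on the default socket;
        // the log above only records the per-container shim sockets.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers are kept in the "k8s.io" namespace,
        // matching the namespace=k8s.io fields in the shim connections above.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            fmt.Println(c.ID()) // e.g. the fb2d8dbc... kube-proxy container created above
        }
    }
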
Jul 16 00:03:47.359605 containerd[1964]: time="2025-07-16T00:03:47.359564748Z" level=info msg="CreateContainer within sandbox \"417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d\"" Jul 16 00:03:47.360117 containerd[1964]: time="2025-07-16T00:03:47.360088240Z" level=info msg="StartContainer for \"fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d\"" Jul 16 00:03:47.360901 containerd[1964]: time="2025-07-16T00:03:47.360890035Z" level=info msg="connecting to shim fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d" address="unix:///run/containerd/s/f5188db11b375714f45f496c6dee924a8142b495de84130d763c54dca2299fa4" protocol=ttrpc version=3 Jul 16 00:03:47.383926 systemd[1]: Started cri-containerd-fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d.scope - libcontainer container fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d. Jul 16 00:03:47.406746 containerd[1964]: time="2025-07-16T00:03:47.406719900Z" level=info msg="StartContainer for \"fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d\" returns successfully" Jul 16 00:03:47.541803 kubelet[3310]: I0716 00:03:47.541767 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rdc6q" podStartSLOduration=1.541750398 podStartE2EDuration="1.541750398s" podCreationTimestamp="2025-07-16 00:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:03:47.54164636 +0000 UTC m=+6.066132048" watchObservedRunningTime="2025-07-16 00:03:47.541750398 +0000 UTC m=+6.066236086" Jul 16 00:03:47.546236 systemd[1]: Created slice kubepods-besteffort-pod281555a6_589b_4fc4_8889_1559fd9826c1.slice - libcontainer container kubepods-besteffort-pod281555a6_589b_4fc4_8889_1559fd9826c1.slice. 
Jul 16 00:03:47.647571 kubelet[3310]: I0716 00:03:47.647370 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/281555a6-589b-4fc4-8889-1559fd9826c1-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-hbdrx\" (UID: \"281555a6-589b-4fc4-8889-1559fd9826c1\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hbdrx" Jul 16 00:03:47.647571 kubelet[3310]: I0716 00:03:47.647474 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68zzz\" (UniqueName: \"kubernetes.io/projected/281555a6-589b-4fc4-8889-1559fd9826c1-kube-api-access-68zzz\") pod \"tigera-operator-5bf8dfcb4-hbdrx\" (UID: \"281555a6-589b-4fc4-8889-1559fd9826c1\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-hbdrx" Jul 16 00:03:47.848286 containerd[1964]: time="2025-07-16T00:03:47.848254490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hbdrx,Uid:281555a6-589b-4fc4-8889-1559fd9826c1,Namespace:tigera-operator,Attempt:0,}" Jul 16 00:03:47.855345 containerd[1964]: time="2025-07-16T00:03:47.855289241Z" level=info msg="connecting to shim 10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0" address="unix:///run/containerd/s/e2c3722510256ff9abb1e9a6ac5b68184d5911a27458d1051f8a977191e1f946" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:03:47.893066 systemd[1]: Started cri-containerd-10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0.scope - libcontainer container 10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0. Jul 16 00:03:47.938947 containerd[1964]: time="2025-07-16T00:03:47.938886696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-hbdrx,Uid:281555a6-589b-4fc4-8889-1559fd9826c1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0\"" Jul 16 00:03:47.939637 containerd[1964]: time="2025-07-16T00:03:47.939623383Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 16 00:03:49.684011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781876394.mount: Deactivated successfully. 
Jul 16 00:03:49.994938 containerd[1964]: time="2025-07-16T00:03:49.994885723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:49.995163 containerd[1964]: time="2025-07-16T00:03:49.995060723Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 16 00:03:49.995353 containerd[1964]: time="2025-07-16T00:03:49.995311544Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:49.996284 containerd[1964]: time="2025-07-16T00:03:49.996242419Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:49.996698 containerd[1964]: time="2025-07-16T00:03:49.996664349Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.057022544s" Jul 16 00:03:49.996698 containerd[1964]: time="2025-07-16T00:03:49.996678818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 16 00:03:49.997603 containerd[1964]: time="2025-07-16T00:03:49.997592997Z" level=info msg="CreateContainer within sandbox \"10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 16 00:03:50.000197 containerd[1964]: time="2025-07-16T00:03:50.000154945Z" level=info msg="Container 72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:03:50.002156 containerd[1964]: time="2025-07-16T00:03:50.002118971Z" level=info msg="CreateContainer within sandbox \"10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb\"" Jul 16 00:03:50.002364 containerd[1964]: time="2025-07-16T00:03:50.002325643Z" level=info msg="StartContainer for \"72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb\"" Jul 16 00:03:50.002721 containerd[1964]: time="2025-07-16T00:03:50.002685894Z" level=info msg="connecting to shim 72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb" address="unix:///run/containerd/s/e2c3722510256ff9abb1e9a6ac5b68184d5911a27458d1051f8a977191e1f946" protocol=ttrpc version=3 Jul 16 00:03:50.026024 systemd[1]: Started cri-containerd-72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb.scope - libcontainer container 72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb. 
Jul 16 00:03:50.039080 containerd[1964]: time="2025-07-16T00:03:50.039059482Z" level=info msg="StartContainer for \"72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb\" returns successfully" Jul 16 00:03:50.557540 kubelet[3310]: I0716 00:03:50.557458 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-hbdrx" podStartSLOduration=1.499806375 podStartE2EDuration="3.557432434s" podCreationTimestamp="2025-07-16 00:03:47 +0000 UTC" firstStartedPulling="2025-07-16 00:03:47.939421787 +0000 UTC m=+6.463907475" lastFinishedPulling="2025-07-16 00:03:49.997047846 +0000 UTC m=+8.521533534" observedRunningTime="2025-07-16 00:03:50.557358752 +0000 UTC m=+9.081844440" watchObservedRunningTime="2025-07-16 00:03:50.557432434 +0000 UTC m=+9.081918120" Jul 16 00:03:54.511115 sudo[2262]: pam_unix(sudo:session): session closed for user root Jul 16 00:03:54.511942 sshd[2261]: Connection closed by 147.75.109.163 port 58336 Jul 16 00:03:54.512106 sshd-session[2259]: pam_unix(sshd:session): session closed for user core Jul 16 00:03:54.516578 systemd-logind[1947]: Session 11 logged out. Waiting for processes to exit. Jul 16 00:03:54.516805 systemd[1]: sshd@8-147.75.203.227:22-147.75.109.163:58336.service: Deactivated successfully. Jul 16 00:03:54.517968 systemd[1]: session-11.scope: Deactivated successfully. Jul 16 00:03:54.518116 systemd[1]: session-11.scope: Consumed 3.306s CPU time, 232.1M memory peak. Jul 16 00:03:54.519388 systemd-logind[1947]: Removed session 11. Jul 16 00:03:55.786201 update_engine[1952]: I20250716 00:03:55.786069 1952 update_attempter.cc:509] Updating boot flags... Jul 16 00:03:57.029170 systemd[1]: Created slice kubepods-besteffort-pod103c2c20_b68e_47e6_982f_744e524fd49e.slice - libcontainer container kubepods-besteffort-pod103c2c20_b68e_47e6_982f_744e524fd49e.slice. Jul 16 00:03:57.112021 kubelet[3310]: I0716 00:03:57.111911 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5dqn\" (UniqueName: \"kubernetes.io/projected/103c2c20-b68e-47e6-982f-744e524fd49e-kube-api-access-f5dqn\") pod \"calico-typha-5c6884466c-z4k9d\" (UID: \"103c2c20-b68e-47e6-982f-744e524fd49e\") " pod="calico-system/calico-typha-5c6884466c-z4k9d" Jul 16 00:03:57.112021 kubelet[3310]: I0716 00:03:57.112008 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/103c2c20-b68e-47e6-982f-744e524fd49e-typha-certs\") pod \"calico-typha-5c6884466c-z4k9d\" (UID: \"103c2c20-b68e-47e6-982f-744e524fd49e\") " pod="calico-system/calico-typha-5c6884466c-z4k9d" Jul 16 00:03:57.112889 kubelet[3310]: I0716 00:03:57.112067 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/103c2c20-b68e-47e6-982f-744e524fd49e-tigera-ca-bundle\") pod \"calico-typha-5c6884466c-z4k9d\" (UID: \"103c2c20-b68e-47e6-982f-744e524fd49e\") " pod="calico-system/calico-typha-5c6884466c-z4k9d" Jul 16 00:03:57.284005 systemd[1]: Created slice kubepods-besteffort-pod6331e9f2_32dc_40cf_9910_c5d467a26949.slice - libcontainer container kubepods-besteffort-pod6331e9f2_32dc_40cf_9910_c5d467a26949.slice. 
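The tigera-operator startup record above shows how the two reported durations relate: podStartE2EDuration (3.557432434s) minus the image-pull window (lastFinishedPulling − firstStartedPulling ≈ 2.057626059s) equals podStartSLOduration (1.499806375s), consistent with the SLO figure excluding time spent pulling images. A short Go check of that arithmetic, using the timestamps copied from the record (converted to RFC 3339 form):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the tigera-operator pod startup record above.
        firstPull, _ := time.Parse(time.RFC3339Nano, "2025-07-16T00:03:47.939421787Z")
        lastPull, _ := time.Parse(time.RFC3339Nano, "2025-07-16T00:03:49.997047846Z")
        e2e := 3.557432434 // podStartE2EDuration, in seconds

        pull := lastPull.Sub(firstPull).Seconds()
        fmt.Printf("image-pull window: %.9fs\n", pull)    // ~2.057626059
        fmt.Printf("e2e minus pull:    %.9fs\n", e2e-pull) // ~1.499806375, i.e. podStartSLOduration
    }
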
Jul 16 00:03:57.312658 kubelet[3310]: I0716 00:03:57.312572 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-cni-bin-dir\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.312658 kubelet[3310]: I0716 00:03:57.312623 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-flexvol-driver-host\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.312658 kubelet[3310]: I0716 00:03:57.312655 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-cni-log-dir\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.312940 kubelet[3310]: I0716 00:03:57.312681 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-cni-net-dir\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.312940 kubelet[3310]: I0716 00:03:57.312704 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-lib-modules\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.312940 kubelet[3310]: I0716 00:03:57.312734 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-var-lib-calico\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.312940 kubelet[3310]: I0716 00:03:57.312758 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6331e9f2-32dc-40cf-9910-c5d467a26949-tigera-ca-bundle\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.312940 kubelet[3310]: I0716 00:03:57.312797 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-xtables-lock\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.313160 kubelet[3310]: I0716 00:03:57.312820 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6331e9f2-32dc-40cf-9910-c5d467a26949-node-certs\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.313160 kubelet[3310]: I0716 00:03:57.312844 3310 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-policysync\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.313160 kubelet[3310]: I0716 00:03:57.312869 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6331e9f2-32dc-40cf-9910-c5d467a26949-var-run-calico\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.313160 kubelet[3310]: I0716 00:03:57.312892 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnsd8\" (UniqueName: \"kubernetes.io/projected/6331e9f2-32dc-40cf-9910-c5d467a26949-kube-api-access-bnsd8\") pod \"calico-node-9nn5h\" (UID: \"6331e9f2-32dc-40cf-9910-c5d467a26949\") " pod="calico-system/calico-node-9nn5h" Jul 16 00:03:57.332913 containerd[1964]: time="2025-07-16T00:03:57.332831623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6884466c-z4k9d,Uid:103c2c20-b68e-47e6-982f-744e524fd49e,Namespace:calico-system,Attempt:0,}" Jul 16 00:03:57.340496 containerd[1964]: time="2025-07-16T00:03:57.340443019Z" level=info msg="connecting to shim 8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67" address="unix:///run/containerd/s/fd1eaa5d8848f04132d8594197b8f25c9399766ccef4e243de04d7abff37a67c" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:03:57.364241 systemd[1]: Started cri-containerd-8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67.scope - libcontainer container 8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67. Jul 16 00:03:57.401248 containerd[1964]: time="2025-07-16T00:03:57.401228179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6884466c-z4k9d,Uid:103c2c20-b68e-47e6-982f-744e524fd49e,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67\"" Jul 16 00:03:57.401881 containerd[1964]: time="2025-07-16T00:03:57.401871510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 16 00:03:57.414104 kubelet[3310]: E0716 00:03:57.414089 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.414104 kubelet[3310]: W0716 00:03:57.414102 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.414186 kubelet[3310]: E0716 00:03:57.414117 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.415050 kubelet[3310]: E0716 00:03:57.415008 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.415050 kubelet[3310]: W0716 00:03:57.415016 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.415050 kubelet[3310]: E0716 00:03:57.415044 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.418367 kubelet[3310]: E0716 00:03:57.418340 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.418436 kubelet[3310]: W0716 00:03:57.418361 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.418436 kubelet[3310]: E0716 00:03:57.418402 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.508644 kubelet[3310]: E0716 00:03:57.508534 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx85x" podUID="d952e491-a482-4114-a7aa-287f7c6c93c7" Jul 16 00:03:57.591251 containerd[1964]: time="2025-07-16T00:03:57.591028248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9nn5h,Uid:6331e9f2-32dc-40cf-9910-c5d467a26949,Namespace:calico-system,Attempt:0,}" Jul 16 00:03:57.599468 containerd[1964]: time="2025-07-16T00:03:57.599428039Z" level=info msg="connecting to shim a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c" address="unix:///run/containerd/s/5d9f101b91bd0faf658b8f7534f9e86647b9374265b84095376cebadfe538383" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:03:57.607418 kubelet[3310]: E0716 00:03:57.607373 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.607418 kubelet[3310]: W0716 00:03:57.607386 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.607418 kubelet[3310]: E0716 00:03:57.607401 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.607529 kubelet[3310]: E0716 00:03:57.607500 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.607529 kubelet[3310]: W0716 00:03:57.607505 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.607529 kubelet[3310]: E0716 00:03:57.607510 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.607593 kubelet[3310]: E0716 00:03:57.607587 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.607593 kubelet[3310]: W0716 00:03:57.607591 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.607631 kubelet[3310]: E0716 00:03:57.607596 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.607663 kubelet[3310]: E0716 00:03:57.607657 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.607663 kubelet[3310]: W0716 00:03:57.607662 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.607697 kubelet[3310]: E0716 00:03:57.607666 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.607735 kubelet[3310]: E0716 00:03:57.607730 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.607753 kubelet[3310]: W0716 00:03:57.607736 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.607753 kubelet[3310]: E0716 00:03:57.607742 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.607833 kubelet[3310]: E0716 00:03:57.607824 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.607833 kubelet[3310]: W0716 00:03:57.607831 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.607874 kubelet[3310]: E0716 00:03:57.607838 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.607914 kubelet[3310]: E0716 00:03:57.607908 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.607937 kubelet[3310]: W0716 00:03:57.607914 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.607937 kubelet[3310]: E0716 00:03:57.607921 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608035 kubelet[3310]: E0716 00:03:57.608029 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608057 kubelet[3310]: W0716 00:03:57.608036 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608057 kubelet[3310]: E0716 00:03:57.608043 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608120 kubelet[3310]: E0716 00:03:57.608114 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608138 kubelet[3310]: W0716 00:03:57.608120 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608138 kubelet[3310]: E0716 00:03:57.608125 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608230 kubelet[3310]: E0716 00:03:57.608224 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608250 kubelet[3310]: W0716 00:03:57.608231 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608250 kubelet[3310]: E0716 00:03:57.608238 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608315 kubelet[3310]: E0716 00:03:57.608310 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608333 kubelet[3310]: W0716 00:03:57.608314 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608333 kubelet[3310]: E0716 00:03:57.608319 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.608389 kubelet[3310]: E0716 00:03:57.608384 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608389 kubelet[3310]: W0716 00:03:57.608388 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608438 kubelet[3310]: E0716 00:03:57.608393 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608464 kubelet[3310]: E0716 00:03:57.608460 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608488 kubelet[3310]: W0716 00:03:57.608464 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608488 kubelet[3310]: E0716 00:03:57.608468 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608537 kubelet[3310]: E0716 00:03:57.608530 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608537 kubelet[3310]: W0716 00:03:57.608534 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608570 kubelet[3310]: E0716 00:03:57.608538 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608601 kubelet[3310]: E0716 00:03:57.608595 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608601 kubelet[3310]: W0716 00:03:57.608600 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608637 kubelet[3310]: E0716 00:03:57.608604 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608664 kubelet[3310]: E0716 00:03:57.608659 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608683 kubelet[3310]: W0716 00:03:57.608664 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608683 kubelet[3310]: E0716 00:03:57.608668 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.608733 kubelet[3310]: E0716 00:03:57.608729 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608750 kubelet[3310]: W0716 00:03:57.608733 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608750 kubelet[3310]: E0716 00:03:57.608737 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608804 kubelet[3310]: E0716 00:03:57.608799 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608804 kubelet[3310]: W0716 00:03:57.608803 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608841 kubelet[3310]: E0716 00:03:57.608808 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608868 kubelet[3310]: E0716 00:03:57.608864 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608885 kubelet[3310]: W0716 00:03:57.608868 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608885 kubelet[3310]: E0716 00:03:57.608872 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.608933 kubelet[3310]: E0716 00:03:57.608928 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.608950 kubelet[3310]: W0716 00:03:57.608933 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.608950 kubelet[3310]: E0716 00:03:57.608937 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.614163 kubelet[3310]: E0716 00:03:57.614155 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614163 kubelet[3310]: W0716 00:03:57.614162 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614208 kubelet[3310]: E0716 00:03:57.614168 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.614208 kubelet[3310]: I0716 00:03:57.614181 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d952e491-a482-4114-a7aa-287f7c6c93c7-kubelet-dir\") pod \"csi-node-driver-lx85x\" (UID: \"d952e491-a482-4114-a7aa-287f7c6c93c7\") " pod="calico-system/csi-node-driver-lx85x" Jul 16 00:03:57.614300 kubelet[3310]: E0716 00:03:57.614291 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614321 kubelet[3310]: W0716 00:03:57.614299 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614321 kubelet[3310]: E0716 00:03:57.614307 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.614321 kubelet[3310]: I0716 00:03:57.614318 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d952e491-a482-4114-a7aa-287f7c6c93c7-registration-dir\") pod \"csi-node-driver-lx85x\" (UID: \"d952e491-a482-4114-a7aa-287f7c6c93c7\") " pod="calico-system/csi-node-driver-lx85x" Jul 16 00:03:57.614430 kubelet[3310]: E0716 00:03:57.614423 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614430 kubelet[3310]: W0716 00:03:57.614429 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614467 kubelet[3310]: E0716 00:03:57.614435 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.614467 kubelet[3310]: I0716 00:03:57.614444 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d952e491-a482-4114-a7aa-287f7c6c93c7-varrun\") pod \"csi-node-driver-lx85x\" (UID: \"d952e491-a482-4114-a7aa-287f7c6c93c7\") " pod="calico-system/csi-node-driver-lx85x" Jul 16 00:03:57.614533 kubelet[3310]: E0716 00:03:57.614526 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614550 kubelet[3310]: W0716 00:03:57.614534 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614550 kubelet[3310]: E0716 00:03:57.614542 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.614610 kubelet[3310]: E0716 00:03:57.614605 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614627 kubelet[3310]: W0716 00:03:57.614610 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614627 kubelet[3310]: E0716 00:03:57.614616 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.614695 kubelet[3310]: E0716 00:03:57.614690 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614712 kubelet[3310]: W0716 00:03:57.614695 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614712 kubelet[3310]: E0716 00:03:57.614700 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.614769 kubelet[3310]: E0716 00:03:57.614760 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614791 kubelet[3310]: W0716 00:03:57.614769 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614791 kubelet[3310]: E0716 00:03:57.614776 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.614843 kubelet[3310]: E0716 00:03:57.614838 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614860 kubelet[3310]: W0716 00:03:57.614843 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614860 kubelet[3310]: E0716 00:03:57.614848 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.614860 kubelet[3310]: I0716 00:03:57.614858 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wz4p\" (UniqueName: \"kubernetes.io/projected/d952e491-a482-4114-a7aa-287f7c6c93c7-kube-api-access-4wz4p\") pod \"csi-node-driver-lx85x\" (UID: \"d952e491-a482-4114-a7aa-287f7c6c93c7\") " pod="calico-system/csi-node-driver-lx85x" Jul 16 00:03:57.614929 kubelet[3310]: E0716 00:03:57.614924 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.614948 kubelet[3310]: W0716 00:03:57.614929 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.614948 kubelet[3310]: E0716 00:03:57.614935 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.614948 kubelet[3310]: I0716 00:03:57.614942 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d952e491-a482-4114-a7aa-287f7c6c93c7-socket-dir\") pod \"csi-node-driver-lx85x\" (UID: \"d952e491-a482-4114-a7aa-287f7c6c93c7\") " pod="calico-system/csi-node-driver-lx85x" Jul 16 00:03:57.615062 kubelet[3310]: E0716 00:03:57.615055 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.615079 kubelet[3310]: W0716 00:03:57.615062 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.615079 kubelet[3310]: E0716 00:03:57.615071 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.615143 kubelet[3310]: E0716 00:03:57.615138 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.615160 kubelet[3310]: W0716 00:03:57.615144 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.615160 kubelet[3310]: E0716 00:03:57.615151 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.615232 kubelet[3310]: E0716 00:03:57.615227 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.615251 kubelet[3310]: W0716 00:03:57.615232 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.615251 kubelet[3310]: E0716 00:03:57.615238 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.615315 kubelet[3310]: E0716 00:03:57.615310 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.615332 kubelet[3310]: W0716 00:03:57.615316 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.615332 kubelet[3310]: E0716 00:03:57.615325 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.615399 kubelet[3310]: E0716 00:03:57.615394 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.615418 kubelet[3310]: W0716 00:03:57.615399 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.615418 kubelet[3310]: E0716 00:03:57.615404 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.615464 kubelet[3310]: E0716 00:03:57.615459 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.615480 kubelet[3310]: W0716 00:03:57.615463 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.615480 kubelet[3310]: E0716 00:03:57.615469 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.628049 systemd[1]: Started cri-containerd-a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c.scope - libcontainer container a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c. Jul 16 00:03:57.640384 containerd[1964]: time="2025-07-16T00:03:57.640361994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9nn5h,Uid:6331e9f2-32dc-40cf-9910-c5d467a26949,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c\"" Jul 16 00:03:57.715692 kubelet[3310]: E0716 00:03:57.715640 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.715692 kubelet[3310]: W0716 00:03:57.715685 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.716069 kubelet[3310]: E0716 00:03:57.715729 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.716360 kubelet[3310]: E0716 00:03:57.716302 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.716360 kubelet[3310]: W0716 00:03:57.716338 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.716541 kubelet[3310]: E0716 00:03:57.716377 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.716903 kubelet[3310]: E0716 00:03:57.716859 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.716903 kubelet[3310]: W0716 00:03:57.716897 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.717162 kubelet[3310]: E0716 00:03:57.716944 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.717433 kubelet[3310]: E0716 00:03:57.717378 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.717433 kubelet[3310]: W0716 00:03:57.717413 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.717632 kubelet[3310]: E0716 00:03:57.717454 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.717901 kubelet[3310]: E0716 00:03:57.717860 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.717901 kubelet[3310]: W0716 00:03:57.717886 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.718135 kubelet[3310]: E0716 00:03:57.717993 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.718351 kubelet[3310]: E0716 00:03:57.718292 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.718351 kubelet[3310]: W0716 00:03:57.718319 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.718538 kubelet[3310]: E0716 00:03:57.718372 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.718664 kubelet[3310]: E0716 00:03:57.718636 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.718664 kubelet[3310]: W0716 00:03:57.718659 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.718887 kubelet[3310]: E0716 00:03:57.718719 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.719112 kubelet[3310]: E0716 00:03:57.719083 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.719112 kubelet[3310]: W0716 00:03:57.719108 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.719272 kubelet[3310]: E0716 00:03:57.719148 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.719675 kubelet[3310]: E0716 00:03:57.719648 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.719789 kubelet[3310]: W0716 00:03:57.719676 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.719789 kubelet[3310]: E0716 00:03:57.719714 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.720135 kubelet[3310]: E0716 00:03:57.720107 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.720236 kubelet[3310]: W0716 00:03:57.720133 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.720236 kubelet[3310]: E0716 00:03:57.720165 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.720546 kubelet[3310]: E0716 00:03:57.720494 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.720546 kubelet[3310]: W0716 00:03:57.720516 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.720738 kubelet[3310]: E0716 00:03:57.720597 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.720869 kubelet[3310]: E0716 00:03:57.720845 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.720869 kubelet[3310]: W0716 00:03:57.720867 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.721054 kubelet[3310]: E0716 00:03:57.720935 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.721221 kubelet[3310]: E0716 00:03:57.721166 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.721221 kubelet[3310]: W0716 00:03:57.721188 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.721518 kubelet[3310]: E0716 00:03:57.721265 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.721628 kubelet[3310]: E0716 00:03:57.721520 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.721628 kubelet[3310]: W0716 00:03:57.721542 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.721818 kubelet[3310]: E0716 00:03:57.721623 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.721936 kubelet[3310]: E0716 00:03:57.721888 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.721936 kubelet[3310]: W0716 00:03:57.721909 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.722093 kubelet[3310]: E0716 00:03:57.721942 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.722394 kubelet[3310]: E0716 00:03:57.722366 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.722484 kubelet[3310]: W0716 00:03:57.722390 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.722484 kubelet[3310]: E0716 00:03:57.722460 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.722720 kubelet[3310]: E0716 00:03:57.722691 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.722720 kubelet[3310]: W0716 00:03:57.722715 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.722969 kubelet[3310]: E0716 00:03:57.722816 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.723153 kubelet[3310]: E0716 00:03:57.723107 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.723153 kubelet[3310]: W0716 00:03:57.723130 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.723345 kubelet[3310]: E0716 00:03:57.723208 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.723506 kubelet[3310]: E0716 00:03:57.723456 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.723506 kubelet[3310]: W0716 00:03:57.723479 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.723685 kubelet[3310]: E0716 00:03:57.723549 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.723891 kubelet[3310]: E0716 00:03:57.723836 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.723891 kubelet[3310]: W0716 00:03:57.723859 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.724093 kubelet[3310]: E0716 00:03:57.723889 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.724319 kubelet[3310]: E0716 00:03:57.724293 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.724422 kubelet[3310]: W0716 00:03:57.724318 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.724422 kubelet[3310]: E0716 00:03:57.724351 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:57.724685 kubelet[3310]: E0716 00:03:57.724658 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.724685 kubelet[3310]: W0716 00:03:57.724682 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.724893 kubelet[3310]: E0716 00:03:57.724708 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.725117 kubelet[3310]: E0716 00:03:57.725092 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.725211 kubelet[3310]: W0716 00:03:57.725119 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.725292 kubelet[3310]: E0716 00:03:57.725187 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.725523 kubelet[3310]: E0716 00:03:57.725497 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.725612 kubelet[3310]: W0716 00:03:57.725521 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.725612 kubelet[3310]: E0716 00:03:57.725545 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.726016 kubelet[3310]: E0716 00:03:57.725991 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.726016 kubelet[3310]: W0716 00:03:57.726015 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.726205 kubelet[3310]: E0716 00:03:57.726040 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:57.736497 kubelet[3310]: E0716 00:03:57.736406 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:57.736497 kubelet[3310]: W0716 00:03:57.736439 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:57.736497 kubelet[3310]: E0716 00:03:57.736472 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:58.824842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887121075.mount: Deactivated successfully. Jul 16 00:03:59.143877 containerd[1964]: time="2025-07-16T00:03:59.143762248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:59.144082 containerd[1964]: time="2025-07-16T00:03:59.143961291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 16 00:03:59.144409 containerd[1964]: time="2025-07-16T00:03:59.144356882Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:59.145264 containerd[1964]: time="2025-07-16T00:03:59.145223976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:03:59.145845 containerd[1964]: time="2025-07-16T00:03:59.145831453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.74394384s" Jul 16 00:03:59.145845 containerd[1964]: time="2025-07-16T00:03:59.145845769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 16 00:03:59.146345 containerd[1964]: time="2025-07-16T00:03:59.146335806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 16 00:03:59.149254 containerd[1964]: time="2025-07-16T00:03:59.149209684Z" level=info msg="CreateContainer within sandbox \"8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 16 00:03:59.151800 containerd[1964]: time="2025-07-16T00:03:59.151756191Z" level=info msg="Container 20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:03:59.154319 containerd[1964]: time="2025-07-16T00:03:59.154277729Z" level=info msg="CreateContainer within sandbox \"8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050\"" Jul 16 00:03:59.154547 containerd[1964]: time="2025-07-16T00:03:59.154491713Z" level=info msg="StartContainer for \"20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050\"" Jul 16 00:03:59.155071 containerd[1964]: time="2025-07-16T00:03:59.155032061Z" level=info msg="connecting to shim 20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050" address="unix:///run/containerd/s/fd1eaa5d8848f04132d8594197b8f25c9399766ccef4e243de04d7abff37a67c" protocol=ttrpc version=3 Jul 16 00:03:59.174920 systemd[1]: Started cri-containerd-20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050.scope - libcontainer container 20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050. 
Jul 16 00:03:59.203558 containerd[1964]: time="2025-07-16T00:03:59.203523231Z" level=info msg="StartContainer for \"20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050\" returns successfully" Jul 16 00:03:59.517571 kubelet[3310]: E0716 00:03:59.517459 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx85x" podUID="d952e491-a482-4114-a7aa-287f7c6c93c7" Jul 16 00:03:59.575575 kubelet[3310]: I0716 00:03:59.575517 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c6884466c-z4k9d" podStartSLOduration=0.830958499 podStartE2EDuration="2.575499678s" podCreationTimestamp="2025-07-16 00:03:57 +0000 UTC" firstStartedPulling="2025-07-16 00:03:57.401742247 +0000 UTC m=+15.926227935" lastFinishedPulling="2025-07-16 00:03:59.146283426 +0000 UTC m=+17.670769114" observedRunningTime="2025-07-16 00:03:59.575373745 +0000 UTC m=+18.099859433" watchObservedRunningTime="2025-07-16 00:03:59.575499678 +0000 UTC m=+18.099985365" Jul 16 00:03:59.626743 kubelet[3310]: E0716 00:03:59.626657 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.626743 kubelet[3310]: W0716 00:03:59.626708 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.627159 kubelet[3310]: E0716 00:03:59.626753 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.627384 kubelet[3310]: E0716 00:03:59.627349 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.627479 kubelet[3310]: W0716 00:03:59.627384 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.627479 kubelet[3310]: E0716 00:03:59.627418 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.627909 kubelet[3310]: E0716 00:03:59.627882 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.628018 kubelet[3310]: W0716 00:03:59.627909 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.628018 kubelet[3310]: E0716 00:03:59.627939 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:59.628465 kubelet[3310]: E0716 00:03:59.628424 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.628563 kubelet[3310]: W0716 00:03:59.628476 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.628563 kubelet[3310]: E0716 00:03:59.628528 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.629156 kubelet[3310]: E0716 00:03:59.629116 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.629294 kubelet[3310]: W0716 00:03:59.629162 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.629294 kubelet[3310]: E0716 00:03:59.629211 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.629730 kubelet[3310]: E0716 00:03:59.629697 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.629848 kubelet[3310]: W0716 00:03:59.629735 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.629848 kubelet[3310]: E0716 00:03:59.629815 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.630381 kubelet[3310]: E0716 00:03:59.630304 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.630381 kubelet[3310]: W0716 00:03:59.630344 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.630578 kubelet[3310]: E0716 00:03:59.630389 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.631025 kubelet[3310]: E0716 00:03:59.630946 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.631025 kubelet[3310]: W0716 00:03:59.630985 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.631326 kubelet[3310]: E0716 00:03:59.631031 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:59.631642 kubelet[3310]: E0716 00:03:59.631588 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.631642 kubelet[3310]: W0716 00:03:59.631625 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.631895 kubelet[3310]: E0716 00:03:59.631667 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.632269 kubelet[3310]: E0716 00:03:59.632188 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.632269 kubelet[3310]: W0716 00:03:59.632233 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.632571 kubelet[3310]: E0716 00:03:59.632281 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.632901 kubelet[3310]: E0716 00:03:59.632846 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.632901 kubelet[3310]: W0716 00:03:59.632887 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.633108 kubelet[3310]: E0716 00:03:59.632934 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.633538 kubelet[3310]: E0716 00:03:59.633463 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.633538 kubelet[3310]: W0716 00:03:59.633492 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.633538 kubelet[3310]: E0716 00:03:59.633523 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.634093 kubelet[3310]: E0716 00:03:59.634035 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.634093 kubelet[3310]: W0716 00:03:59.634065 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.634093 kubelet[3310]: E0716 00:03:59.634095 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:59.634593 kubelet[3310]: E0716 00:03:59.634537 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.634593 kubelet[3310]: W0716 00:03:59.634568 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.634832 kubelet[3310]: E0716 00:03:59.634599 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.635140 kubelet[3310]: E0716 00:03:59.635085 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.635140 kubelet[3310]: W0716 00:03:59.635125 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.635356 kubelet[3310]: E0716 00:03:59.635172 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.636040 kubelet[3310]: E0716 00:03:59.635990 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.636040 kubelet[3310]: W0716 00:03:59.636019 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.636268 kubelet[3310]: E0716 00:03:59.636050 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.636578 kubelet[3310]: E0716 00:03:59.636528 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.636578 kubelet[3310]: W0716 00:03:59.636553 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.636844 kubelet[3310]: E0716 00:03:59.636585 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.637268 kubelet[3310]: E0716 00:03:59.637190 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.637268 kubelet[3310]: W0716 00:03:59.637235 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.637514 kubelet[3310]: E0716 00:03:59.637284 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:59.637827 kubelet[3310]: E0716 00:03:59.637759 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.637827 kubelet[3310]: W0716 00:03:59.637808 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.638082 kubelet[3310]: E0716 00:03:59.637852 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.638347 kubelet[3310]: E0716 00:03:59.638291 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.638347 kubelet[3310]: W0716 00:03:59.638329 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.638566 kubelet[3310]: E0716 00:03:59.638375 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.638967 kubelet[3310]: E0716 00:03:59.638886 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.638967 kubelet[3310]: W0716 00:03:59.638912 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.638967 kubelet[3310]: E0716 00:03:59.638951 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.639504 kubelet[3310]: E0716 00:03:59.639429 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.639504 kubelet[3310]: W0716 00:03:59.639458 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.639504 kubelet[3310]: E0716 00:03:59.639499 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.640198 kubelet[3310]: E0716 00:03:59.640122 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.640198 kubelet[3310]: W0716 00:03:59.640151 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.640198 kubelet[3310]: E0716 00:03:59.640194 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:59.640656 kubelet[3310]: E0716 00:03:59.640623 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.640656 kubelet[3310]: W0716 00:03:59.640653 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.640927 kubelet[3310]: E0716 00:03:59.640694 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.641229 kubelet[3310]: E0716 00:03:59.641147 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.641229 kubelet[3310]: W0716 00:03:59.641176 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.641518 kubelet[3310]: E0716 00:03:59.641255 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.641698 kubelet[3310]: E0716 00:03:59.641668 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.641698 kubelet[3310]: W0716 00:03:59.641694 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.641991 kubelet[3310]: E0716 00:03:59.641796 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.642204 kubelet[3310]: E0716 00:03:59.642169 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.642305 kubelet[3310]: W0716 00:03:59.642207 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.642398 kubelet[3310]: E0716 00:03:59.642310 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.642681 kubelet[3310]: E0716 00:03:59.642653 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.642811 kubelet[3310]: W0716 00:03:59.642679 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.642811 kubelet[3310]: E0716 00:03:59.642714 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:03:59.643339 kubelet[3310]: E0716 00:03:59.643301 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.643440 kubelet[3310]: W0716 00:03:59.643341 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.643440 kubelet[3310]: E0716 00:03:59.643387 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.643968 kubelet[3310]: E0716 00:03:59.643916 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.643968 kubelet[3310]: W0716 00:03:59.643940 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.644279 kubelet[3310]: E0716 00:03:59.643973 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.644521 kubelet[3310]: E0716 00:03:59.644460 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.644521 kubelet[3310]: W0716 00:03:59.644496 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.644700 kubelet[3310]: E0716 00:03:59.644536 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.645117 kubelet[3310]: E0716 00:03:59.645060 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.645117 kubelet[3310]: W0716 00:03:59.645086 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.645117 kubelet[3310]: E0716 00:03:59.645115 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 16 00:03:59.645694 kubelet[3310]: E0716 00:03:59.645624 3310 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 16 00:03:59.645694 kubelet[3310]: W0716 00:03:59.645652 3310 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 16 00:03:59.645694 kubelet[3310]: E0716 00:03:59.645683 3310 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 16 00:04:00.523302 containerd[1964]: time="2025-07-16T00:04:00.523248957Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:00.523551 containerd[1964]: time="2025-07-16T00:04:00.523402097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 16 00:04:00.523816 containerd[1964]: time="2025-07-16T00:04:00.523795658Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:00.524627 containerd[1964]: time="2025-07-16T00:04:00.524612399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:00.525007 containerd[1964]: time="2025-07-16T00:04:00.524971582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.378620995s" Jul 16 00:04:00.525007 containerd[1964]: time="2025-07-16T00:04:00.524987116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 16 00:04:00.525962 containerd[1964]: time="2025-07-16T00:04:00.525949468Z" level=info msg="CreateContainer within sandbox \"a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 16 00:04:00.528995 containerd[1964]: time="2025-07-16T00:04:00.528951336Z" level=info msg="Container 16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:00.532506 containerd[1964]: time="2025-07-16T00:04:00.532464394Z" level=info msg="CreateContainer within sandbox \"a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef\"" Jul 16 00:04:00.532686 containerd[1964]: time="2025-07-16T00:04:00.532674498Z" level=info msg="StartContainer for \"16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef\"" Jul 16 00:04:00.533412 containerd[1964]: time="2025-07-16T00:04:00.533399562Z" level=info msg="connecting to shim 16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef" address="unix:///run/containerd/s/5d9f101b91bd0faf658b8f7534f9e86647b9374265b84095376cebadfe538383" protocol=ttrpc version=3 Jul 16 00:04:00.560878 systemd[1]: Started cri-containerd-16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef.scope - libcontainer container 16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef. 
Jul 16 00:04:00.571701 kubelet[3310]: I0716 00:04:00.571686 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:00.578571 containerd[1964]: time="2025-07-16T00:04:00.578545734Z" level=info msg="StartContainer for \"16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef\" returns successfully" Jul 16 00:04:00.582201 systemd[1]: cri-containerd-16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef.scope: Deactivated successfully. Jul 16 00:04:00.583529 containerd[1964]: time="2025-07-16T00:04:00.583512452Z" level=info msg="received exit event container_id:\"16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef\" id:\"16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef\" pid:4139 exited_at:{seconds:1752624240 nanos:583286077}" Jul 16 00:04:00.583607 containerd[1964]: time="2025-07-16T00:04:00.583594709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef\" id:\"16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef\" pid:4139 exited_at:{seconds:1752624240 nanos:583286077}" Jul 16 00:04:00.594196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef-rootfs.mount: Deactivated successfully. Jul 16 00:04:01.518106 kubelet[3310]: E0716 00:04:01.517985 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx85x" podUID="d952e491-a482-4114-a7aa-287f7c6c93c7" Jul 16 00:04:01.580925 containerd[1964]: time="2025-07-16T00:04:01.580835163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 16 00:04:03.517498 kubelet[3310]: E0716 00:04:03.517439 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lx85x" podUID="d952e491-a482-4114-a7aa-287f7c6c93c7" Jul 16 00:04:03.963323 containerd[1964]: time="2025-07-16T00:04:03.963270778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:03.963523 containerd[1964]: time="2025-07-16T00:04:03.963497984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 16 00:04:03.963826 containerd[1964]: time="2025-07-16T00:04:03.963791450Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:03.964636 containerd[1964]: time="2025-07-16T00:04:03.964598131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:03.964988 containerd[1964]: time="2025-07-16T00:04:03.964950516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.384040432s" Jul 16 00:04:03.964988 containerd[1964]: time="2025-07-16T00:04:03.964963982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 16 00:04:03.965905 containerd[1964]: time="2025-07-16T00:04:03.965893868Z" level=info msg="CreateContainer within sandbox \"a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 16 00:04:03.968632 containerd[1964]: time="2025-07-16T00:04:03.968618661Z" level=info msg="Container 8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:03.972376 containerd[1964]: time="2025-07-16T00:04:03.972361117Z" level=info msg="CreateContainer within sandbox \"a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf\"" Jul 16 00:04:03.972598 containerd[1964]: time="2025-07-16T00:04:03.972585917Z" level=info msg="StartContainer for \"8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf\"" Jul 16 00:04:03.973550 containerd[1964]: time="2025-07-16T00:04:03.973538780Z" level=info msg="connecting to shim 8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf" address="unix:///run/containerd/s/5d9f101b91bd0faf658b8f7534f9e86647b9374265b84095376cebadfe538383" protocol=ttrpc version=3 Jul 16 00:04:03.988055 systemd[1]: Started cri-containerd-8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf.scope - libcontainer container 8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf. Jul 16 00:04:04.006618 containerd[1964]: time="2025-07-16T00:04:04.006594735Z" level=info msg="StartContainer for \"8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf\" returns successfully" Jul 16 00:04:04.591323 systemd[1]: cri-containerd-8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf.scope: Deactivated successfully. Jul 16 00:04:04.591555 systemd[1]: cri-containerd-8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf.scope: Consumed 362ms CPU time, 194.2M memory peak, 171.2M written to disk. Jul 16 00:04:04.592254 containerd[1964]: time="2025-07-16T00:04:04.592234286Z" level=info msg="received exit event container_id:\"8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf\" id:\"8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf\" pid:4199 exited_at:{seconds:1752624244 nanos:592097778}" Jul 16 00:04:04.592332 containerd[1964]: time="2025-07-16T00:04:04.592240889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf\" id:\"8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf\" pid:4199 exited_at:{seconds:1752624244 nanos:592097778}" Jul 16 00:04:04.603195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf-rootfs.mount: Deactivated successfully. 
Jul 16 00:04:04.621494 kubelet[3310]: I0716 00:04:04.621412 3310 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 16 00:04:04.677526 kubelet[3310]: I0716 00:04:04.677427 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/817f396a-92cf-42e2-b40f-0bd522186d23-config-volume\") pod \"coredns-7c65d6cfc9-lj477\" (UID: \"817f396a-92cf-42e2-b40f-0bd522186d23\") " pod="kube-system/coredns-7c65d6cfc9-lj477" Jul 16 00:04:04.677834 kubelet[3310]: I0716 00:04:04.677600 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gd7d\" (UniqueName: \"kubernetes.io/projected/817f396a-92cf-42e2-b40f-0bd522186d23-kube-api-access-2gd7d\") pod \"coredns-7c65d6cfc9-lj477\" (UID: \"817f396a-92cf-42e2-b40f-0bd522186d23\") " pod="kube-system/coredns-7c65d6cfc9-lj477" Jul 16 00:04:04.692493 systemd[1]: Created slice kubepods-burstable-pod817f396a_92cf_42e2_b40f_0bd522186d23.slice - libcontainer container kubepods-burstable-pod817f396a_92cf_42e2_b40f_0bd522186d23.slice. Jul 16 00:04:04.701015 systemd[1]: Created slice kubepods-burstable-pod69c1261e_5c9d_45dd_bb1d_036a7ac8a303.slice - libcontainer container kubepods-burstable-pod69c1261e_5c9d_45dd_bb1d_036a7ac8a303.slice. Jul 16 00:04:04.705453 systemd[1]: Created slice kubepods-besteffort-podf2a60695_17f2_48fd_96de_335e0509a3da.slice - libcontainer container kubepods-besteffort-podf2a60695_17f2_48fd_96de_335e0509a3da.slice. Jul 16 00:04:04.711212 systemd[1]: Created slice kubepods-besteffort-pod1b936d2a_ef32_4746_8cca_3e2a3a8494c3.slice - libcontainer container kubepods-besteffort-pod1b936d2a_ef32_4746_8cca_3e2a3a8494c3.slice. Jul 16 00:04:04.715696 systemd[1]: Created slice kubepods-besteffort-podee786be9_80e5_49c5_8020_5ce0d112fde3.slice - libcontainer container kubepods-besteffort-podee786be9_80e5_49c5_8020_5ce0d112fde3.slice. Jul 16 00:04:04.719606 systemd[1]: Created slice kubepods-besteffort-pod3cad4247_8f73_4bc9_ba43_3b74b00ae4de.slice - libcontainer container kubepods-besteffort-pod3cad4247_8f73_4bc9_ba43_3b74b00ae4de.slice. Jul 16 00:04:04.724062 systemd[1]: Created slice kubepods-besteffort-pod89629daf_1089_4d68_899e_34a27da1a4f4.slice - libcontainer container kubepods-besteffort-pod89629daf_1089_4d68_899e_34a27da1a4f4.slice. 
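Editor's note: the "Created slice" lines show the systemd cgroup naming the kubelet uses for pods: the QoS class (burstable, besteffort) plus the pod UID with its hyphens replaced by underscores. The sketch below reproduces the names seen above; `podSliceName` is a hypothetical helper written for illustration, not kubelet code.

```go
// slice_name_sketch.go — reproduce the pod-UID to systemd slice naming visible in
// the "Created slice" messages above (hyphens in the UID become underscores).
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "817f396a-92cf-42e2-b40f-0bd522186d23"))
	// kubepods-burstable-pod817f396a_92cf_42e2_b40f_0bd522186d23.slice
	fmt.Println(podSliceName("besteffort", "f2a60695-17f2-48fd-96de-335e0509a3da"))
	// kubepods-besteffort-podf2a60695_17f2_48fd_96de_335e0509a3da.slice
}
```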
Jul 16 00:04:04.778933 kubelet[3310]: I0716 00:04:04.778855 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a60695-17f2-48fd-96de-335e0509a3da-tigera-ca-bundle\") pod \"calico-kube-controllers-7778486458-86hwh\" (UID: \"f2a60695-17f2-48fd-96de-335e0509a3da\") " pod="calico-system/calico-kube-controllers-7778486458-86hwh" Jul 16 00:04:04.779181 kubelet[3310]: I0716 00:04:04.778992 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69c1261e-5c9d-45dd-bb1d-036a7ac8a303-config-volume\") pod \"coredns-7c65d6cfc9-vbg4v\" (UID: \"69c1261e-5c9d-45dd-bb1d-036a7ac8a303\") " pod="kube-system/coredns-7c65d6cfc9-vbg4v" Jul 16 00:04:04.779325 kubelet[3310]: I0716 00:04:04.779155 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs2hs\" (UniqueName: \"kubernetes.io/projected/f2a60695-17f2-48fd-96de-335e0509a3da-kube-api-access-vs2hs\") pod \"calico-kube-controllers-7778486458-86hwh\" (UID: \"f2a60695-17f2-48fd-96de-335e0509a3da\") " pod="calico-system/calico-kube-controllers-7778486458-86hwh" Jul 16 00:04:04.779325 kubelet[3310]: I0716 00:04:04.779257 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnwj\" (UniqueName: \"kubernetes.io/projected/3cad4247-8f73-4bc9-ba43-3b74b00ae4de-kube-api-access-fgnwj\") pod \"calico-apiserver-74b875fcc5-pv79p\" (UID: \"3cad4247-8f73-4bc9-ba43-3b74b00ae4de\") " pod="calico-apiserver/calico-apiserver-74b875fcc5-pv79p" Jul 16 00:04:04.779613 kubelet[3310]: I0716 00:04:04.779522 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cad4247-8f73-4bc9-ba43-3b74b00ae4de-calico-apiserver-certs\") pod \"calico-apiserver-74b875fcc5-pv79p\" (UID: \"3cad4247-8f73-4bc9-ba43-3b74b00ae4de\") " pod="calico-apiserver/calico-apiserver-74b875fcc5-pv79p" Jul 16 00:04:04.779842 kubelet[3310]: I0716 00:04:04.779616 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgljj\" (UniqueName: \"kubernetes.io/projected/69c1261e-5c9d-45dd-bb1d-036a7ac8a303-kube-api-access-cgljj\") pod \"coredns-7c65d6cfc9-vbg4v\" (UID: \"69c1261e-5c9d-45dd-bb1d-036a7ac8a303\") " pod="kube-system/coredns-7c65d6cfc9-vbg4v" Jul 16 00:04:04.880975 kubelet[3310]: I0716 00:04:04.880727 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b936d2a-ef32-4746-8cca-3e2a3a8494c3-config\") pod \"goldmane-58fd7646b9-qtx46\" (UID: \"1b936d2a-ef32-4746-8cca-3e2a3a8494c3\") " pod="calico-system/goldmane-58fd7646b9-qtx46" Jul 16 00:04:04.880975 kubelet[3310]: I0716 00:04:04.880848 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-ca-bundle\") pod \"whisker-7596c76d8-gcdwn\" (UID: \"89629daf-1089-4d68-899e-34a27da1a4f4\") " pod="calico-system/whisker-7596c76d8-gcdwn" Jul 16 00:04:04.880975 kubelet[3310]: I0716 00:04:04.880906 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1b936d2a-ef32-4746-8cca-3e2a3a8494c3-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-qtx46\" (UID: \"1b936d2a-ef32-4746-8cca-3e2a3a8494c3\") " pod="calico-system/goldmane-58fd7646b9-qtx46" Jul 16 00:04:04.881713 kubelet[3310]: I0716 00:04:04.881014 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1b936d2a-ef32-4746-8cca-3e2a3a8494c3-goldmane-key-pair\") pod \"goldmane-58fd7646b9-qtx46\" (UID: \"1b936d2a-ef32-4746-8cca-3e2a3a8494c3\") " pod="calico-system/goldmane-58fd7646b9-qtx46" Jul 16 00:04:04.881713 kubelet[3310]: I0716 00:04:04.881103 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p2pg\" (UniqueName: \"kubernetes.io/projected/1b936d2a-ef32-4746-8cca-3e2a3a8494c3-kube-api-access-9p2pg\") pod \"goldmane-58fd7646b9-qtx46\" (UID: \"1b936d2a-ef32-4746-8cca-3e2a3a8494c3\") " pod="calico-system/goldmane-58fd7646b9-qtx46" Jul 16 00:04:04.881713 kubelet[3310]: I0716 00:04:04.881255 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfsv\" (UniqueName: \"kubernetes.io/projected/ee786be9-80e5-49c5-8020-5ce0d112fde3-kube-api-access-zwfsv\") pod \"calico-apiserver-74b875fcc5-9xcms\" (UID: \"ee786be9-80e5-49c5-8020-5ce0d112fde3\") " pod="calico-apiserver/calico-apiserver-74b875fcc5-9xcms" Jul 16 00:04:04.881713 kubelet[3310]: I0716 00:04:04.881300 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knsqx\" (UniqueName: \"kubernetes.io/projected/89629daf-1089-4d68-899e-34a27da1a4f4-kube-api-access-knsqx\") pod \"whisker-7596c76d8-gcdwn\" (UID: \"89629daf-1089-4d68-899e-34a27da1a4f4\") " pod="calico-system/whisker-7596c76d8-gcdwn" Jul 16 00:04:04.882445 kubelet[3310]: I0716 00:04:04.881916 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-backend-key-pair\") pod \"whisker-7596c76d8-gcdwn\" (UID: \"89629daf-1089-4d68-899e-34a27da1a4f4\") " pod="calico-system/whisker-7596c76d8-gcdwn" Jul 16 00:04:04.882445 kubelet[3310]: I0716 00:04:04.882082 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee786be9-80e5-49c5-8020-5ce0d112fde3-calico-apiserver-certs\") pod \"calico-apiserver-74b875fcc5-9xcms\" (UID: \"ee786be9-80e5-49c5-8020-5ce0d112fde3\") " pod="calico-apiserver/calico-apiserver-74b875fcc5-9xcms" Jul 16 00:04:04.997332 containerd[1964]: time="2025-07-16T00:04:04.997312178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lj477,Uid:817f396a-92cf-42e2-b40f-0bd522186d23,Namespace:kube-system,Attempt:0,}" Jul 16 00:04:05.004060 containerd[1964]: time="2025-07-16T00:04:05.004008736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbg4v,Uid:69c1261e-5c9d-45dd-bb1d-036a7ac8a303,Namespace:kube-system,Attempt:0,}" Jul 16 00:04:05.008493 containerd[1964]: time="2025-07-16T00:04:05.008471772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778486458-86hwh,Uid:f2a60695-17f2-48fd-96de-335e0509a3da,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:05.014193 containerd[1964]: time="2025-07-16T00:04:05.014138918Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qtx46,Uid:1b936d2a-ef32-4746-8cca-3e2a3a8494c3,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:05.018734 containerd[1964]: time="2025-07-16T00:04:05.018691903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-9xcms,Uid:ee786be9-80e5-49c5-8020-5ce0d112fde3,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:04:05.023478 containerd[1964]: time="2025-07-16T00:04:05.023415477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-pv79p,Uid:3cad4247-8f73-4bc9-ba43-3b74b00ae4de,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:04:05.027799 containerd[1964]: time="2025-07-16T00:04:05.027539439Z" level=error msg="Failed to destroy network for sandbox \"15fe4fae32a042137197bf1c84d316f17eab6c1046d451ef987fff34900c735e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.027799 containerd[1964]: time="2025-07-16T00:04:05.027615377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596c76d8-gcdwn,Uid:89629daf-1089-4d68-899e-34a27da1a4f4,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:05.029275 containerd[1964]: time="2025-07-16T00:04:05.029220951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lj477,Uid:817f396a-92cf-42e2-b40f-0bd522186d23,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fe4fae32a042137197bf1c84d316f17eab6c1046d451ef987fff34900c735e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.030234 kubelet[3310]: E0716 00:04:05.030157 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fe4fae32a042137197bf1c84d316f17eab6c1046d451ef987fff34900c735e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.030364 kubelet[3310]: E0716 00:04:05.030308 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fe4fae32a042137197bf1c84d316f17eab6c1046d451ef987fff34900c735e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lj477" Jul 16 00:04:05.030364 kubelet[3310]: E0716 00:04:05.030337 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fe4fae32a042137197bf1c84d316f17eab6c1046d451ef987fff34900c735e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lj477" Jul 16 00:04:05.030431 kubelet[3310]: E0716 00:04:05.030396 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lj477_kube-system(817f396a-92cf-42e2-b40f-0bd522186d23)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lj477_kube-system(817f396a-92cf-42e2-b40f-0bd522186d23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15fe4fae32a042137197bf1c84d316f17eab6c1046d451ef987fff34900c735e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lj477" podUID="817f396a-92cf-42e2-b40f-0bd522186d23" Jul 16 00:04:05.038263 containerd[1964]: time="2025-07-16T00:04:05.038221762Z" level=error msg="Failed to destroy network for sandbox \"41838bc6cc33cb96326000a4a942301c6d07832eedeb9285d1fb043feb457159\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.038940 containerd[1964]: time="2025-07-16T00:04:05.038869684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbg4v,Uid:69c1261e-5c9d-45dd-bb1d-036a7ac8a303,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"41838bc6cc33cb96326000a4a942301c6d07832eedeb9285d1fb043feb457159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.039630 kubelet[3310]: E0716 00:04:05.039460 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41838bc6cc33cb96326000a4a942301c6d07832eedeb9285d1fb043feb457159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.039794 kubelet[3310]: E0716 00:04:05.039647 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41838bc6cc33cb96326000a4a942301c6d07832eedeb9285d1fb043feb457159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vbg4v" Jul 16 00:04:05.039794 kubelet[3310]: E0716 00:04:05.039687 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41838bc6cc33cb96326000a4a942301c6d07832eedeb9285d1fb043feb457159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vbg4v" Jul 16 00:04:05.039878 kubelet[3310]: E0716 00:04:05.039799 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vbg4v_kube-system(69c1261e-5c9d-45dd-bb1d-036a7ac8a303)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vbg4v_kube-system(69c1261e-5c9d-45dd-bb1d-036a7ac8a303)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41838bc6cc33cb96326000a4a942301c6d07832eedeb9285d1fb043feb457159\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vbg4v" podUID="69c1261e-5c9d-45dd-bb1d-036a7ac8a303" Jul 16 00:04:05.042966 containerd[1964]: time="2025-07-16T00:04:05.042925845Z" level=error msg="Failed to destroy network for sandbox \"24d2fdc9f3af0d66978d046f3dc0e9ba41de1ec00a4c0f089df453282a345a52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.043484 containerd[1964]: time="2025-07-16T00:04:05.043458090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778486458-86hwh,Uid:f2a60695-17f2-48fd-96de-335e0509a3da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24d2fdc9f3af0d66978d046f3dc0e9ba41de1ec00a4c0f089df453282a345a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.043733 kubelet[3310]: E0716 00:04:05.043700 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24d2fdc9f3af0d66978d046f3dc0e9ba41de1ec00a4c0f089df453282a345a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.043791 kubelet[3310]: E0716 00:04:05.043770 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24d2fdc9f3af0d66978d046f3dc0e9ba41de1ec00a4c0f089df453282a345a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7778486458-86hwh" Jul 16 00:04:05.043827 kubelet[3310]: E0716 00:04:05.043791 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24d2fdc9f3af0d66978d046f3dc0e9ba41de1ec00a4c0f089df453282a345a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7778486458-86hwh" Jul 16 00:04:05.043870 kubelet[3310]: E0716 00:04:05.043835 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7778486458-86hwh_calico-system(f2a60695-17f2-48fd-96de-335e0509a3da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7778486458-86hwh_calico-system(f2a60695-17f2-48fd-96de-335e0509a3da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24d2fdc9f3af0d66978d046f3dc0e9ba41de1ec00a4c0f089df453282a345a52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7778486458-86hwh" podUID="f2a60695-17f2-48fd-96de-335e0509a3da" Jul 16 00:04:05.052965 containerd[1964]: 
time="2025-07-16T00:04:05.052927527Z" level=error msg="Failed to destroy network for sandbox \"b5bb9cbe56729fbc381efbc16d112de4fde6c696dc1b6049f0a6ae411bbe530f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.053389 containerd[1964]: time="2025-07-16T00:04:05.053374900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qtx46,Uid:1b936d2a-ef32-4746-8cca-3e2a3a8494c3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5bb9cbe56729fbc381efbc16d112de4fde6c696dc1b6049f0a6ae411bbe530f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.053520 kubelet[3310]: E0716 00:04:05.053498 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5bb9cbe56729fbc381efbc16d112de4fde6c696dc1b6049f0a6ae411bbe530f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.053550 kubelet[3310]: E0716 00:04:05.053543 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5bb9cbe56729fbc381efbc16d112de4fde6c696dc1b6049f0a6ae411bbe530f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qtx46" Jul 16 00:04:05.053576 kubelet[3310]: E0716 00:04:05.053557 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5bb9cbe56729fbc381efbc16d112de4fde6c696dc1b6049f0a6ae411bbe530f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qtx46" Jul 16 00:04:05.053597 kubelet[3310]: E0716 00:04:05.053586 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-qtx46_calico-system(1b936d2a-ef32-4746-8cca-3e2a3a8494c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-qtx46_calico-system(1b936d2a-ef32-4746-8cca-3e2a3a8494c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5bb9cbe56729fbc381efbc16d112de4fde6c696dc1b6049f0a6ae411bbe530f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qtx46" podUID="1b936d2a-ef32-4746-8cca-3e2a3a8494c3" Jul 16 00:04:05.054857 containerd[1964]: time="2025-07-16T00:04:05.054836116Z" level=error msg="Failed to destroy network for sandbox \"36c130431a359bfa8fa8e345b1159b3e394083504a74327ba0fab96547b98d83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 16 00:04:05.055272 containerd[1964]: time="2025-07-16T00:04:05.055252279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-9xcms,Uid:ee786be9-80e5-49c5-8020-5ce0d112fde3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c130431a359bfa8fa8e345b1159b3e394083504a74327ba0fab96547b98d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.055448 kubelet[3310]: E0716 00:04:05.055426 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c130431a359bfa8fa8e345b1159b3e394083504a74327ba0fab96547b98d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.055509 kubelet[3310]: E0716 00:04:05.055475 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c130431a359bfa8fa8e345b1159b3e394083504a74327ba0fab96547b98d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b875fcc5-9xcms" Jul 16 00:04:05.055538 kubelet[3310]: E0716 00:04:05.055508 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36c130431a359bfa8fa8e345b1159b3e394083504a74327ba0fab96547b98d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b875fcc5-9xcms" Jul 16 00:04:05.055587 kubelet[3310]: E0716 00:04:05.055557 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b875fcc5-9xcms_calico-apiserver(ee786be9-80e5-49c5-8020-5ce0d112fde3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b875fcc5-9xcms_calico-apiserver(ee786be9-80e5-49c5-8020-5ce0d112fde3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36c130431a359bfa8fa8e345b1159b3e394083504a74327ba0fab96547b98d83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b875fcc5-9xcms" podUID="ee786be9-80e5-49c5-8020-5ce0d112fde3" Jul 16 00:04:05.058084 containerd[1964]: time="2025-07-16T00:04:05.058062458Z" level=error msg="Failed to destroy network for sandbox \"308db7144f5a73e17d53c4188ddb5f94c6272675f67f6355ea801a3581142f69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.058545 containerd[1964]: time="2025-07-16T00:04:05.058499318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-pv79p,Uid:3cad4247-8f73-4bc9-ba43-3b74b00ae4de,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"308db7144f5a73e17d53c4188ddb5f94c6272675f67f6355ea801a3581142f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.058647 kubelet[3310]: E0716 00:04:05.058631 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308db7144f5a73e17d53c4188ddb5f94c6272675f67f6355ea801a3581142f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.058674 kubelet[3310]: E0716 00:04:05.058661 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308db7144f5a73e17d53c4188ddb5f94c6272675f67f6355ea801a3581142f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b875fcc5-pv79p" Jul 16 00:04:05.058694 kubelet[3310]: E0716 00:04:05.058673 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"308db7144f5a73e17d53c4188ddb5f94c6272675f67f6355ea801a3581142f69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74b875fcc5-pv79p" Jul 16 00:04:05.058712 kubelet[3310]: E0716 00:04:05.058697 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74b875fcc5-pv79p_calico-apiserver(3cad4247-8f73-4bc9-ba43-3b74b00ae4de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74b875fcc5-pv79p_calico-apiserver(3cad4247-8f73-4bc9-ba43-3b74b00ae4de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"308db7144f5a73e17d53c4188ddb5f94c6272675f67f6355ea801a3581142f69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74b875fcc5-pv79p" podUID="3cad4247-8f73-4bc9-ba43-3b74b00ae4de" Jul 16 00:04:05.061906 containerd[1964]: time="2025-07-16T00:04:05.061858797Z" level=error msg="Failed to destroy network for sandbox \"989f5cee808b4b996971a448f9a4d3f0223ab84957e41e3c7622f6891c543525\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.062287 containerd[1964]: time="2025-07-16T00:04:05.062243938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596c76d8-gcdwn,Uid:89629daf-1089-4d68-899e-34a27da1a4f4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"989f5cee808b4b996971a448f9a4d3f0223ab84957e41e3c7622f6891c543525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 16 00:04:05.062335 kubelet[3310]: E0716 00:04:05.062323 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989f5cee808b4b996971a448f9a4d3f0223ab84957e41e3c7622f6891c543525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.062359 kubelet[3310]: E0716 00:04:05.062346 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989f5cee808b4b996971a448f9a4d3f0223ab84957e41e3c7622f6891c543525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7596c76d8-gcdwn" Jul 16 00:04:05.062382 kubelet[3310]: E0716 00:04:05.062357 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989f5cee808b4b996971a448f9a4d3f0223ab84957e41e3c7622f6891c543525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7596c76d8-gcdwn" Jul 16 00:04:05.062402 kubelet[3310]: E0716 00:04:05.062382 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7596c76d8-gcdwn_calico-system(89629daf-1089-4d68-899e-34a27da1a4f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7596c76d8-gcdwn_calico-system(89629daf-1089-4d68-899e-34a27da1a4f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"989f5cee808b4b996971a448f9a4d3f0223ab84957e41e3c7622f6891c543525\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7596c76d8-gcdwn" podUID="89629daf-1089-4d68-899e-34a27da1a4f4" Jul 16 00:04:05.534913 systemd[1]: Created slice kubepods-besteffort-podd952e491_a482_4114_a7aa_287f7c6c93c7.slice - libcontainer container kubepods-besteffort-podd952e491_a482_4114_a7aa_287f7c6c93c7.slice. 
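Editor's note: all of the RunPodSandbox failures above share one root cause. The Calico CNI plugin stats /var/lib/calico/nodename before setting up any pod network, and that file only appears once the calico-node container is running and has mounted /var/lib/calico/. A simplified sketch of that precondition check (for illustration only, not the actual plugin source):

```go
// nodename_check_sketch.go — simplified sketch of the precondition the Calico CNI
// plugin is failing in the sandbox errors above: /var/lib/calico/nodename must
// exist (it is written by the calico-node container) before any CNI add/delete
// can proceed.
package main

import (
	"fmt"
	"os"
	"strings"
)

func calicoNodename() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if err != nil {
		// Mirror the hint printed in the log when the file is missing.
		return "", fmt.Errorf("%w: check that the calico/node container is running "+
			"and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```

Once calico-node starts a few entries later, the file exists and sandbox creation stops failing, as the successful whisker pod setup further down shows.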
Jul 16 00:04:05.541011 containerd[1964]: time="2025-07-16T00:04:05.540886466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx85x,Uid:d952e491-a482-4114-a7aa-287f7c6c93c7,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:05.565376 containerd[1964]: time="2025-07-16T00:04:05.565326757Z" level=error msg="Failed to destroy network for sandbox \"27de5203af78a17aa336dd6591aac27a1517d4922a8641e1bc1affc59923a8fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.565807 containerd[1964]: time="2025-07-16T00:04:05.565749414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx85x,Uid:d952e491-a482-4114-a7aa-287f7c6c93c7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"27de5203af78a17aa336dd6591aac27a1517d4922a8641e1bc1affc59923a8fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.565910 kubelet[3310]: E0716 00:04:05.565858 3310 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27de5203af78a17aa336dd6591aac27a1517d4922a8641e1bc1affc59923a8fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 16 00:04:05.565910 kubelet[3310]: E0716 00:04:05.565889 3310 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27de5203af78a17aa336dd6591aac27a1517d4922a8641e1bc1affc59923a8fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lx85x" Jul 16 00:04:05.565910 kubelet[3310]: E0716 00:04:05.565904 3310 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27de5203af78a17aa336dd6591aac27a1517d4922a8641e1bc1affc59923a8fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lx85x" Jul 16 00:04:05.566014 kubelet[3310]: E0716 00:04:05.565929 3310 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lx85x_calico-system(d952e491-a482-4114-a7aa-287f7c6c93c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lx85x_calico-system(d952e491-a482-4114-a7aa-287f7c6c93c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27de5203af78a17aa336dd6591aac27a1517d4922a8641e1bc1affc59923a8fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lx85x" podUID="d952e491-a482-4114-a7aa-287f7c6c93c7" Jul 16 00:04:05.602145 containerd[1964]: time="2025-07-16T00:04:05.602055924Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 16 00:04:05.974892 systemd[1]: run-netns-cni\x2d0aff3215\x2d9dcb\x2d8a87\x2da98b\x2dabc195407895.mount: Deactivated successfully. Jul 16 00:04:06.103123 kubelet[3310]: I0716 00:04:06.103053 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:08.822417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423320978.mount: Deactivated successfully. Jul 16 00:04:08.843477 containerd[1964]: time="2025-07-16T00:04:08.843454961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:08.843737 containerd[1964]: time="2025-07-16T00:04:08.843711868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 16 00:04:08.844110 containerd[1964]: time="2025-07-16T00:04:08.844094607Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:08.844864 containerd[1964]: time="2025-07-16T00:04:08.844838984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:08.845171 containerd[1964]: time="2025-07-16T00:04:08.845156334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.243014449s" Jul 16 00:04:08.845171 containerd[1964]: time="2025-07-16T00:04:08.845171764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 16 00:04:08.848540 containerd[1964]: time="2025-07-16T00:04:08.848524571Z" level=info msg="CreateContainer within sandbox \"a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 16 00:04:08.852359 containerd[1964]: time="2025-07-16T00:04:08.852344619Z" level=info msg="Container d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:08.873952 containerd[1964]: time="2025-07-16T00:04:08.873902027Z" level=info msg="CreateContainer within sandbox \"a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\"" Jul 16 00:04:08.874294 containerd[1964]: time="2025-07-16T00:04:08.874241325Z" level=info msg="StartContainer for \"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\"" Jul 16 00:04:08.875057 containerd[1964]: time="2025-07-16T00:04:08.875014202Z" level=info msg="connecting to shim d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601" address="unix:///run/containerd/s/5d9f101b91bd0faf658b8f7534f9e86647b9374265b84095376cebadfe538383" protocol=ttrpc version=3 Jul 16 00:04:08.889953 systemd[1]: Started cri-containerd-d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601.scope - libcontainer container 
d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601. Jul 16 00:04:08.915473 containerd[1964]: time="2025-07-16T00:04:08.915446773Z" level=info msg="StartContainer for \"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" returns successfully" Jul 16 00:04:08.979277 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 16 00:04:08.979338 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 16 00:04:09.206128 kubelet[3310]: I0716 00:04:09.206015 3310 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-ca-bundle\") pod \"89629daf-1089-4d68-899e-34a27da1a4f4\" (UID: \"89629daf-1089-4d68-899e-34a27da1a4f4\") " Jul 16 00:04:09.206891 kubelet[3310]: I0716 00:04:09.206139 3310 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knsqx\" (UniqueName: \"kubernetes.io/projected/89629daf-1089-4d68-899e-34a27da1a4f4-kube-api-access-knsqx\") pod \"89629daf-1089-4d68-899e-34a27da1a4f4\" (UID: \"89629daf-1089-4d68-899e-34a27da1a4f4\") " Jul 16 00:04:09.206891 kubelet[3310]: I0716 00:04:09.206225 3310 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-backend-key-pair\") pod \"89629daf-1089-4d68-899e-34a27da1a4f4\" (UID: \"89629daf-1089-4d68-899e-34a27da1a4f4\") " Jul 16 00:04:09.206891 kubelet[3310]: I0716 00:04:09.206796 3310 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "89629daf-1089-4d68-899e-34a27da1a4f4" (UID: "89629daf-1089-4d68-899e-34a27da1a4f4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 16 00:04:09.211898 kubelet[3310]: I0716 00:04:09.211799 3310 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89629daf-1089-4d68-899e-34a27da1a4f4-kube-api-access-knsqx" (OuterVolumeSpecName: "kube-api-access-knsqx") pod "89629daf-1089-4d68-899e-34a27da1a4f4" (UID: "89629daf-1089-4d68-899e-34a27da1a4f4"). InnerVolumeSpecName "kube-api-access-knsqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 16 00:04:09.211898 kubelet[3310]: I0716 00:04:09.211840 3310 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "89629daf-1089-4d68-899e-34a27da1a4f4" (UID: "89629daf-1089-4d68-899e-34a27da1a4f4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 16 00:04:09.307569 kubelet[3310]: I0716 00:04:09.307453 3310 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-ca-bundle\") on node \"ci-4372.0.1-n-fdc39dabbd\" DevicePath \"\"" Jul 16 00:04:09.307569 kubelet[3310]: I0716 00:04:09.307524 3310 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knsqx\" (UniqueName: \"kubernetes.io/projected/89629daf-1089-4d68-899e-34a27da1a4f4-kube-api-access-knsqx\") on node \"ci-4372.0.1-n-fdc39dabbd\" DevicePath \"\"" Jul 16 00:04:09.307569 kubelet[3310]: I0716 00:04:09.307555 3310 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89629daf-1089-4d68-899e-34a27da1a4f4-whisker-backend-key-pair\") on node \"ci-4372.0.1-n-fdc39dabbd\" DevicePath \"\"" Jul 16 00:04:09.532304 systemd[1]: Removed slice kubepods-besteffort-pod89629daf_1089_4d68_899e_34a27da1a4f4.slice - libcontainer container kubepods-besteffort-pod89629daf_1089_4d68_899e_34a27da1a4f4.slice. Jul 16 00:04:09.657613 kubelet[3310]: I0716 00:04:09.657504 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9nn5h" podStartSLOduration=1.452822074 podStartE2EDuration="12.657466372s" podCreationTimestamp="2025-07-16 00:03:57 +0000 UTC" firstStartedPulling="2025-07-16 00:03:57.640878304 +0000 UTC m=+16.165363991" lastFinishedPulling="2025-07-16 00:04:08.845522599 +0000 UTC m=+27.370008289" observedRunningTime="2025-07-16 00:04:09.65665598 +0000 UTC m=+28.181141754" watchObservedRunningTime="2025-07-16 00:04:09.657466372 +0000 UTC m=+28.181952125" Jul 16 00:04:09.728187 systemd[1]: Created slice kubepods-besteffort-podc32436c6_7f47_49b9_801b_a46352d4388d.slice - libcontainer container kubepods-besteffort-podc32436c6_7f47_49b9_801b_a46352d4388d.slice. Jul 16 00:04:09.811623 kubelet[3310]: I0716 00:04:09.811420 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c32436c6-7f47-49b9-801b-a46352d4388d-whisker-ca-bundle\") pod \"whisker-5f87c565b8-4cdrt\" (UID: \"c32436c6-7f47-49b9-801b-a46352d4388d\") " pod="calico-system/whisker-5f87c565b8-4cdrt" Jul 16 00:04:09.811623 kubelet[3310]: I0716 00:04:09.811562 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c32436c6-7f47-49b9-801b-a46352d4388d-whisker-backend-key-pair\") pod \"whisker-5f87c565b8-4cdrt\" (UID: \"c32436c6-7f47-49b9-801b-a46352d4388d\") " pod="calico-system/whisker-5f87c565b8-4cdrt" Jul 16 00:04:09.811965 kubelet[3310]: I0716 00:04:09.811694 3310 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klc86\" (UniqueName: \"kubernetes.io/projected/c32436c6-7f47-49b9-801b-a46352d4388d-kube-api-access-klc86\") pod \"whisker-5f87c565b8-4cdrt\" (UID: \"c32436c6-7f47-49b9-801b-a46352d4388d\") " pod="calico-system/whisker-5f87c565b8-4cdrt" Jul 16 00:04:09.829644 systemd[1]: var-lib-kubelet-pods-89629daf\x2d1089\x2d4d68\x2d899e\x2d34a27da1a4f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dknsqx.mount: Deactivated successfully. 
Jul 16 00:04:09.829926 systemd[1]: var-lib-kubelet-pods-89629daf\x2d1089\x2d4d68\x2d899e\x2d34a27da1a4f4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 16 00:04:10.032942 containerd[1964]: time="2025-07-16T00:04:10.032820072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f87c565b8-4cdrt,Uid:c32436c6-7f47-49b9-801b-a46352d4388d,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:10.090226 systemd-networkd[1878]: cali174dc10d95f: Link UP Jul 16 00:04:10.090357 systemd-networkd[1878]: cali174dc10d95f: Gained carrier Jul 16 00:04:10.097284 containerd[1964]: 2025-07-16 00:04:10.045 [INFO][4717] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 16 00:04:10.097284 containerd[1964]: 2025-07-16 00:04:10.051 [INFO][4717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0 whisker-5f87c565b8- calico-system c32436c6-7f47-49b9-801b-a46352d4388d 843 0 2025-07-16 00:04:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f87c565b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd whisker-5f87c565b8-4cdrt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali174dc10d95f [] [] }} ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-" Jul 16 00:04:10.097284 containerd[1964]: 2025-07-16 00:04:10.051 [INFO][4717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" Jul 16 00:04:10.097284 containerd[1964]: 2025-07-16 00:04:10.064 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" HandleID="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.064 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" HandleID="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"whisker-5f87c565b8-4cdrt", "timestamp":"2025-07-16 00:04:10.064646364 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.064 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.064 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.064 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.068 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.070 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.072 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.073 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097450 containerd[1964]: 2025-07-16 00:04:10.074 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097628 containerd[1964]: 2025-07-16 00:04:10.074 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097628 containerd[1964]: 2025-07-16 00:04:10.075 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48 Jul 16 00:04:10.097628 containerd[1964]: 2025-07-16 00:04:10.077 [INFO][4739] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097628 containerd[1964]: 2025-07-16 00:04:10.080 [INFO][4739] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.65/26] block=192.168.81.64/26 handle="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097628 containerd[1964]: 2025-07-16 00:04:10.080 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.65/26] handle="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:10.097628 containerd[1964]: 2025-07-16 00:04:10.080 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
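The ipam/ipam.go entries above trace the pattern every pod on this node follows: acquire the host-wide IPAM lock, confirm the node's affinity for the block 192.168.81.64/26, hand out the next free address from that block, write the block back, and release the lock. The following is a deliberately simplified, in-memory sketch of block-based assignment using only the standard library; it leaves out the datastore writes, affinity lookups and locking that the log records, and it is not Calico's ipam package.

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for an IPAM affinity block such as 192.168.81.64/26:
// addresses are handed out sequentially and remembered in `used`.
type block struct {
	prefix netip.Prefix
	used   map[netip.Addr]bool
}

func newBlock(cidr string) *block {
	return &block{prefix: netip.MustParsePrefix(cidr), used: map[netip.Addr]bool{}}
}

// assign returns the next unused address in the block, skipping the block's
// first (network) address, or false when the block is exhausted.
func (b *block) assign() (netip.Addr, bool) {
	for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := newBlock("192.168.81.64/26")
	for i := 0; i < 4; i++ {
		a, _ := b.assign()
		fmt.Println(a) // 192.168.81.65, .66, .67, .68 -- the order seen later in this log
	}
}
```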
Jul 16 00:04:10.097628 containerd[1964]: 2025-07-16 00:04:10.080 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.65/26] IPv6=[] ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" HandleID="k8s-pod-network.02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" Jul 16 00:04:10.097751 containerd[1964]: 2025-07-16 00:04:10.081 [INFO][4717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0", GenerateName:"whisker-5f87c565b8-", Namespace:"calico-system", SelfLink:"", UID:"c32436c6-7f47-49b9-801b-a46352d4388d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f87c565b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"whisker-5f87c565b8-4cdrt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali174dc10d95f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:10.097751 containerd[1964]: 2025-07-16 00:04:10.081 [INFO][4717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.65/32] ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" Jul 16 00:04:10.097826 containerd[1964]: 2025-07-16 00:04:10.081 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali174dc10d95f ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" Jul 16 00:04:10.097826 containerd[1964]: 2025-07-16 00:04:10.090 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" Jul 16 00:04:10.097863 containerd[1964]: 2025-07-16 00:04:10.090 [INFO][4717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" 
Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0", GenerateName:"whisker-5f87c565b8-", Namespace:"calico-system", SelfLink:"", UID:"c32436c6-7f47-49b9-801b-a46352d4388d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 4, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f87c565b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48", Pod:"whisker-5f87c565b8-4cdrt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.81.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali174dc10d95f", MAC:"02:57:e9:f3:55:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:10.097906 containerd[1964]: 2025-07-16 00:04:10.096 [INFO][4717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" Namespace="calico-system" Pod="whisker-5f87c565b8-4cdrt" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-whisker--5f87c565b8--4cdrt-eth0" Jul 16 00:04:10.105627 containerd[1964]: time="2025-07-16T00:04:10.105602657Z" level=info msg="connecting to shim 02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48" address="unix:///run/containerd/s/640d87aa9a755ee13ce59c9394c478aae3bacfe9a7ee35a4ed903e498d4e8e82" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:10.126987 systemd[1]: Started cri-containerd-02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48.scope - libcontainer container 02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48. 
Jul 16 00:04:10.172220 containerd[1964]: time="2025-07-16T00:04:10.172189443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f87c565b8-4cdrt,Uid:c32436c6-7f47-49b9-801b-a46352d4388d,Namespace:calico-system,Attempt:0,} returns sandbox id \"02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48\"" Jul 16 00:04:10.173160 containerd[1964]: time="2025-07-16T00:04:10.173145312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 16 00:04:10.362985 systemd-networkd[1878]: vxlan.calico: Link UP Jul 16 00:04:10.362989 systemd-networkd[1878]: vxlan.calico: Gained carrier Jul 16 00:04:10.616825 kubelet[3310]: I0716 00:04:10.616779 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:11.518542 kubelet[3310]: I0716 00:04:11.518480 3310 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89629daf-1089-4d68-899e-34a27da1a4f4" path="/var/lib/kubelet/pods/89629daf-1089-4d68-899e-34a27da1a4f4/volumes" Jul 16 00:04:11.562786 containerd[1964]: time="2025-07-16T00:04:11.562738569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:11.563002 containerd[1964]: time="2025-07-16T00:04:11.562974559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 16 00:04:11.563307 containerd[1964]: time="2025-07-16T00:04:11.563272286Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:11.564160 containerd[1964]: time="2025-07-16T00:04:11.564107873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:11.564858 containerd[1964]: time="2025-07-16T00:04:11.564805779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.391583743s" Jul 16 00:04:11.564858 containerd[1964]: time="2025-07-16T00:04:11.564821443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 16 00:04:11.565640 containerd[1964]: time="2025-07-16T00:04:11.565630510Z" level=info msg="CreateContainer within sandbox \"02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 16 00:04:11.568365 containerd[1964]: time="2025-07-16T00:04:11.568329058Z" level=info msg="Container c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:11.571208 containerd[1964]: time="2025-07-16T00:04:11.571166224Z" level=info msg="CreateContainer within sandbox \"02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1\"" Jul 16 00:04:11.571427 containerd[1964]: time="2025-07-16T00:04:11.571382625Z" level=info 
msg="StartContainer for \"c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1\"" Jul 16 00:04:11.571923 containerd[1964]: time="2025-07-16T00:04:11.571883817Z" level=info msg="connecting to shim c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1" address="unix:///run/containerd/s/640d87aa9a755ee13ce59c9394c478aae3bacfe9a7ee35a4ed903e498d4e8e82" protocol=ttrpc version=3 Jul 16 00:04:11.586072 systemd[1]: Started cri-containerd-c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1.scope - libcontainer container c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1. Jul 16 00:04:11.604918 systemd-networkd[1878]: cali174dc10d95f: Gained IPv6LL Jul 16 00:04:11.614096 containerd[1964]: time="2025-07-16T00:04:11.614074450Z" level=info msg="StartContainer for \"c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1\" returns successfully" Jul 16 00:04:11.614616 containerd[1964]: time="2025-07-16T00:04:11.614604799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 16 00:04:12.308095 systemd-networkd[1878]: vxlan.calico: Gained IPv6LL Jul 16 00:04:13.427374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196134600.mount: Deactivated successfully. Jul 16 00:04:13.431447 containerd[1964]: time="2025-07-16T00:04:13.431402481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:13.431625 containerd[1964]: time="2025-07-16T00:04:13.431611603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 16 00:04:13.431986 containerd[1964]: time="2025-07-16T00:04:13.431946624Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:13.432861 containerd[1964]: time="2025-07-16T00:04:13.432810370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:13.433199 containerd[1964]: time="2025-07-16T00:04:13.433178926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 1.818558602s" Jul 16 00:04:13.433199 containerd[1964]: time="2025-07-16T00:04:13.433193167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 16 00:04:13.434106 containerd[1964]: time="2025-07-16T00:04:13.434093722Z" level=info msg="CreateContainer within sandbox \"02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 16 00:04:13.436595 containerd[1964]: time="2025-07-16T00:04:13.436583117Z" level=info msg="Container 9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:13.439652 containerd[1964]: time="2025-07-16T00:04:13.439610038Z" level=info 
msg="CreateContainer within sandbox \"02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10\"" Jul 16 00:04:13.439888 containerd[1964]: time="2025-07-16T00:04:13.439837860Z" level=info msg="StartContainer for \"9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10\"" Jul 16 00:04:13.440375 containerd[1964]: time="2025-07-16T00:04:13.440335187Z" level=info msg="connecting to shim 9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10" address="unix:///run/containerd/s/640d87aa9a755ee13ce59c9394c478aae3bacfe9a7ee35a4ed903e498d4e8e82" protocol=ttrpc version=3 Jul 16 00:04:13.457928 systemd[1]: Started cri-containerd-9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10.scope - libcontainer container 9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10. Jul 16 00:04:13.490276 containerd[1964]: time="2025-07-16T00:04:13.490222835Z" level=info msg="StartContainer for \"9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10\" returns successfully" Jul 16 00:04:13.654450 kubelet[3310]: I0716 00:04:13.654317 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f87c565b8-4cdrt" podStartSLOduration=1.393728519 podStartE2EDuration="4.654279641s" podCreationTimestamp="2025-07-16 00:04:09 +0000 UTC" firstStartedPulling="2025-07-16 00:04:10.172969053 +0000 UTC m=+28.697454741" lastFinishedPulling="2025-07-16 00:04:13.433520174 +0000 UTC m=+31.958005863" observedRunningTime="2025-07-16 00:04:13.653141023 +0000 UTC m=+32.177626782" watchObservedRunningTime="2025-07-16 00:04:13.654279641 +0000 UTC m=+32.178765382" Jul 16 00:04:13.817957 kubelet[3310]: I0716 00:04:13.817855 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:13.927997 containerd[1964]: time="2025-07-16T00:04:13.927972773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"1b7cf66be602115f0d4efa3bca469dd027ed64ba8ca8b7d0b4c5bb30b6d2c409\" pid:5199 exited_at:{seconds:1752624253 nanos:927734746}" Jul 16 00:04:13.973021 containerd[1964]: time="2025-07-16T00:04:13.972990960Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"72ef2c50703e5e4528823496189789263ca25a305122e93088fba325a2b0755f\" pid:5235 exited_at:{seconds:1752624253 nanos:972784993}" Jul 16 00:04:16.518219 containerd[1964]: time="2025-07-16T00:04:16.518090452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qtx46,Uid:1b936d2a-ef32-4746-8cca-3e2a3a8494c3,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:16.629355 systemd-networkd[1878]: calie4f486f64b7: Link UP Jul 16 00:04:16.629588 systemd-networkd[1878]: calie4f486f64b7: Gained carrier Jul 16 00:04:16.635854 containerd[1964]: 2025-07-16 00:04:16.538 [INFO][5267] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0 goldmane-58fd7646b9- calico-system 1b936d2a-ef32-4746-8cca-3e2a3a8494c3 778 0 2025-07-16 00:03:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd goldmane-58fd7646b9-qtx46 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie4f486f64b7 [] [] }} ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-" Jul 16 00:04:16.635854 containerd[1964]: 2025-07-16 00:04:16.538 [INFO][5267] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" Jul 16 00:04:16.635854 containerd[1964]: 2025-07-16 00:04:16.551 [INFO][5287] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" HandleID="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.551 [INFO][5287] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" HandleID="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033fcc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"goldmane-58fd7646b9-qtx46", "timestamp":"2025-07-16 00:04:16.551837375 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.551 [INFO][5287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.551 [INFO][5287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.551 [INFO][5287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.558 [INFO][5287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.574 [INFO][5287] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.584 [INFO][5287] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.588 [INFO][5287] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636008 containerd[1964]: 2025-07-16 00:04:16.593 [INFO][5287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636173 containerd[1964]: 2025-07-16 00:04:16.593 [INFO][5287] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636173 containerd[1964]: 2025-07-16 00:04:16.596 [INFO][5287] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea Jul 16 00:04:16.636173 containerd[1964]: 2025-07-16 00:04:16.605 [INFO][5287] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636173 containerd[1964]: 2025-07-16 00:04:16.616 [INFO][5287] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.66/26] block=192.168.81.64/26 handle="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636173 containerd[1964]: 2025-07-16 00:04:16.616 [INFO][5287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.66/26] handle="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:16.636173 containerd[1964]: 2025-07-16 00:04:16.616 [INFO][5287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:04:16.636173 containerd[1964]: 2025-07-16 00:04:16.616 [INFO][5287] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.66/26] IPv6=[] ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" HandleID="k8s-pod-network.6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" Jul 16 00:04:16.636283 containerd[1964]: 2025-07-16 00:04:16.621 [INFO][5267] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1b936d2a-ef32-4746-8cca-3e2a3a8494c3", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"goldmane-58fd7646b9-qtx46", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4f486f64b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:16.636283 containerd[1964]: 2025-07-16 00:04:16.622 [INFO][5267] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.66/32] ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" Jul 16 00:04:16.636381 containerd[1964]: 2025-07-16 00:04:16.622 [INFO][5267] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4f486f64b7 ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" Jul 16 00:04:16.636381 containerd[1964]: 2025-07-16 00:04:16.629 [INFO][5267] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" Jul 16 00:04:16.636424 containerd[1964]: 2025-07-16 00:04:16.630 [INFO][5267] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1b936d2a-ef32-4746-8cca-3e2a3a8494c3", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea", Pod:"goldmane-58fd7646b9-qtx46", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.81.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie4f486f64b7", MAC:"3a:83:5d:61:3d:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:16.636463 containerd[1964]: 2025-07-16 00:04:16.634 [INFO][5267] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" Namespace="calico-system" Pod="goldmane-58fd7646b9-qtx46" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-goldmane--58fd7646b9--qtx46-eth0" Jul 16 00:04:16.644380 containerd[1964]: time="2025-07-16T00:04:16.644344327Z" level=info msg="connecting to shim 6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea" address="unix:///run/containerd/s/b74539e1177e4a5bf14e19b520adedb84ef00615af0b0369e7178a078ac53c9e" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:16.666948 systemd[1]: Started cri-containerd-6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea.scope - libcontainer container 6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea. 
Jul 16 00:04:16.693430 containerd[1964]: time="2025-07-16T00:04:16.693408041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qtx46,Uid:1b936d2a-ef32-4746-8cca-3e2a3a8494c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea\"" Jul 16 00:04:16.694062 containerd[1964]: time="2025-07-16T00:04:16.694050226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 16 00:04:17.519389 containerd[1964]: time="2025-07-16T00:04:17.519256299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx85x,Uid:d952e491-a482-4114-a7aa-287f7c6c93c7,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:17.520228 containerd[1964]: time="2025-07-16T00:04:17.519297464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lj477,Uid:817f396a-92cf-42e2-b40f-0bd522186d23,Namespace:kube-system,Attempt:0,}" Jul 16 00:04:17.575920 systemd-networkd[1878]: cali32cc1e2fdad: Link UP Jul 16 00:04:17.576113 systemd-networkd[1878]: cali32cc1e2fdad: Gained carrier Jul 16 00:04:17.581520 containerd[1964]: 2025-07-16 00:04:17.540 [INFO][5368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0 coredns-7c65d6cfc9- kube-system 817f396a-92cf-42e2-b40f-0bd522186d23 771 0 2025-07-16 00:03:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd coredns-7c65d6cfc9-lj477 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32cc1e2fdad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-" Jul 16 00:04:17.581520 containerd[1964]: 2025-07-16 00:04:17.540 [INFO][5368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" Jul 16 00:04:17.581520 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5409] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" HandleID="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5409] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" HandleID="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138810), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"coredns-7c65d6cfc9-lj477", "timestamp":"2025-07-16 00:04:17.553414813 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5409] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5409] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5409] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.558 [INFO][5409] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.561 [INFO][5409] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.564 [INFO][5409] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.565 [INFO][5409] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581682 containerd[1964]: 2025-07-16 00:04:17.567 [INFO][5409] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581865 containerd[1964]: 2025-07-16 00:04:17.567 [INFO][5409] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581865 containerd[1964]: 2025-07-16 00:04:17.568 [INFO][5409] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1 Jul 16 00:04:17.581865 containerd[1964]: 2025-07-16 00:04:17.571 [INFO][5409] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581865 containerd[1964]: 2025-07-16 00:04:17.573 [INFO][5409] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.67/26] block=192.168.81.64/26 handle="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581865 containerd[1964]: 2025-07-16 00:04:17.573 [INFO][5409] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.67/26] handle="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.581865 containerd[1964]: 2025-07-16 00:04:17.573 [INFO][5409] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:04:17.581865 containerd[1964]: 2025-07-16 00:04:17.573 [INFO][5409] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.67/26] IPv6=[] ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" HandleID="k8s-pod-network.f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" Jul 16 00:04:17.581983 containerd[1964]: 2025-07-16 00:04:17.574 [INFO][5368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"817f396a-92cf-42e2-b40f-0bd522186d23", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"coredns-7c65d6cfc9-lj477", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32cc1e2fdad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:17.581983 containerd[1964]: 2025-07-16 00:04:17.575 [INFO][5368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.67/32] ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" Jul 16 00:04:17.581983 containerd[1964]: 2025-07-16 00:04:17.575 [INFO][5368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32cc1e2fdad ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" Jul 16 00:04:17.581983 containerd[1964]: 2025-07-16 00:04:17.576 [INFO][5368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" Jul 16 00:04:17.581983 containerd[1964]: 2025-07-16 00:04:17.576 [INFO][5368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"817f396a-92cf-42e2-b40f-0bd522186d23", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1", Pod:"coredns-7c65d6cfc9-lj477", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32cc1e2fdad", MAC:"52:f0:37:44:d3:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:17.581983 containerd[1964]: 2025-07-16 00:04:17.580 [INFO][5368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lj477" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--lj477-eth0" Jul 16 00:04:17.589564 containerd[1964]: time="2025-07-16T00:04:17.589529616Z" level=info msg="connecting to shim f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1" address="unix:///run/containerd/s/5a9cf07996c5e1fefdb6a319048a38792fc06398847adca8ab4d5c4306d431d9" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:17.609956 systemd[1]: Started cri-containerd-f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1.scope - libcontainer container f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1. 
Jul 16 00:04:17.637316 containerd[1964]: time="2025-07-16T00:04:17.637292247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lj477,Uid:817f396a-92cf-42e2-b40f-0bd522186d23,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1\"" Jul 16 00:04:17.638530 containerd[1964]: time="2025-07-16T00:04:17.638515366Z" level=info msg="CreateContainer within sandbox \"f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:04:17.641740 containerd[1964]: time="2025-07-16T00:04:17.641722502Z" level=info msg="Container 4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:17.644088 containerd[1964]: time="2025-07-16T00:04:17.644067917Z" level=info msg="CreateContainer within sandbox \"f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c\"" Jul 16 00:04:17.644375 containerd[1964]: time="2025-07-16T00:04:17.644358274Z" level=info msg="StartContainer for \"4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c\"" Jul 16 00:04:17.644978 containerd[1964]: time="2025-07-16T00:04:17.644965046Z" level=info msg="connecting to shim 4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c" address="unix:///run/containerd/s/5a9cf07996c5e1fefdb6a319048a38792fc06398847adca8ab4d5c4306d431d9" protocol=ttrpc version=3 Jul 16 00:04:17.665103 systemd[1]: Started cri-containerd-4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c.scope - libcontainer container 4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c. 
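Note that the coredns container 4842fa39... connects to the same shim socket (/run/containerd/s/5a9cf079...) as its sandbox f4bf5d72..., just as the whisker and whisker-backend containers earlier shared the sandbox socket 640d87aa.... When reading a journal like this one, grouping container IDs by shim address makes those relationships easy to see. The sketch below does that with a regular expression tuned to the exact "connecting to shim" wording used here (an assumption if the format ever changes), reading the journal text from stdin.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Group container/sandbox IDs by the shim socket they connect to, based on
// containerd's `connecting to shim <id> ... address="unix://<path>"` entries.
// Feed the journal text on stdin, e.g.:  journalctl -u containerd | go run main.go
var shimRE = regexp.MustCompile(`connecting to shim ([0-9a-f]{64}).*?address="unix://([^"]+)"`)

func main() {
	bySocket := map[string][]string{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // lines in this journal are very long
	for sc.Scan() {
		for _, m := range shimRE.FindAllStringSubmatch(sc.Text(), -1) {
			bySocket[m[2]] = append(bySocket[m[2]], m[1])
		}
	}
	for sock, ids := range bySocket {
		fmt.Println(sock)
		for _, id := range ids {
			fmt.Println("  ", id)
		}
	}
}
```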
Jul 16 00:04:17.679547 systemd-networkd[1878]: cali6a0bc295311: Link UP Jul 16 00:04:17.679791 systemd-networkd[1878]: cali6a0bc295311: Gained carrier Jul 16 00:04:17.684013 containerd[1964]: time="2025-07-16T00:04:17.683971834Z" level=info msg="StartContainer for \"4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c\" returns successfully" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.539 [INFO][5362] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0 csi-node-driver- calico-system d952e491-a482-4114-a7aa-287f7c6c93c7 667 0 2025-07-16 00:03:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd csi-node-driver-lx85x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6a0bc295311 [] [] }} ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.539 [INFO][5362] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" HandleID="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" HandleID="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000538d00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"csi-node-driver-lx85x", "timestamp":"2025-07-16 00:04:17.553693506 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.553 [INFO][5411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.573 [INFO][5411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.573 [INFO][5411] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.658 [INFO][5411] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.661 [INFO][5411] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.664 [INFO][5411] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.668 [INFO][5411] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.670 [INFO][5411] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.670 [INFO][5411] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.670 [INFO][5411] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9 Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.672 [INFO][5411] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.675 [INFO][5411] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.68/26] block=192.168.81.64/26 handle="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.675 [INFO][5411] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.68/26] handle="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.675 [INFO][5411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 16 00:04:17.687486 containerd[1964]: 2025-07-16 00:04:17.675 [INFO][5411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.68/26] IPv6=[] ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" HandleID="k8s-pod-network.b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" Jul 16 00:04:17.688306 containerd[1964]: 2025-07-16 00:04:17.677 [INFO][5362] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d952e491-a482-4114-a7aa-287f7c6c93c7", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"csi-node-driver-lx85x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a0bc295311", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:17.688306 containerd[1964]: 2025-07-16 00:04:17.677 [INFO][5362] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.68/32] ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" Jul 16 00:04:17.688306 containerd[1964]: 2025-07-16 00:04:17.677 [INFO][5362] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a0bc295311 ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" Jul 16 00:04:17.688306 containerd[1964]: 2025-07-16 00:04:17.679 [INFO][5362] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" Jul 16 00:04:17.688306 containerd[1964]: 2025-07-16 00:04:17.680 [INFO][5362] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d952e491-a482-4114-a7aa-287f7c6c93c7", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9", Pod:"csi-node-driver-lx85x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.81.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a0bc295311", MAC:"ba:56:0d:d8:66:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:17.688306 containerd[1964]: 2025-07-16 00:04:17.685 [INFO][5362] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" Namespace="calico-system" Pod="csi-node-driver-lx85x" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-csi--node--driver--lx85x-eth0" Jul 16 00:04:17.698448 containerd[1964]: time="2025-07-16T00:04:17.698411877Z" level=info msg="connecting to shim b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9" address="unix:///run/containerd/s/e7ed3ea7684e6e1c3c4ed3173ff13dd13fdda039aea04b746c9871028b4869e0" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:17.718960 systemd[1]: Started cri-containerd-b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9.scope - libcontainer container b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9. 
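At this point the plugin has written a WorkloadEndpoint carrying the pod address (192.168.81.68/32), the host-side veth name (cali6a0bc295311) and a MAC, and containerd proceeds to start the sandbox through its shim. A CNI ADD call like this one hands the runtime a small JSON result; the snippet below decodes a hand-written approximation of such a result. The JSON literal was not captured from this system: the cniVersion, the placement of the MAC on eth0, and the exact field layout Calico emits are assumptions made for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

// resultJSON approximates the JSON a CNI ADD returns to the runtime for the
// pod above. Illustrative only; the real result comes from the Calico plugin.
const resultJSON = `{
  "cniVersion": "1.0.0",
  "interfaces": [
    {"name": "cali6a0bc295311"},
    {"name": "eth0", "mac": "ba:56:0d:d8:66:1a"}
  ],
  "ips": [
    {"address": "192.168.81.68/32", "interface": 1}
  ]
}`

type cniResult struct {
	CNIVersion string `json:"cniVersion"`
	Interfaces []struct {
		Name    string `json:"name"`
		Mac     string `json:"mac"`
		Sandbox string `json:"sandbox"`
	} `json:"interfaces"`
	IPs []struct {
		Address   string `json:"address"`
		Interface int    `json:"interface"`
	} `json:"ips"`
}

func main() {
	var r cniResult
	if err := json.Unmarshal([]byte(resultJSON), &r); err != nil {
		panic(err)
	}
	// The pod address and the host-side veth recorded in the endpoint above.
	fmt.Println(r.IPs[0].Address, r.Interfaces[0].Name)
}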
Jul 16 00:04:17.730532 containerd[1964]: time="2025-07-16T00:04:17.730510293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lx85x,Uid:d952e491-a482-4114-a7aa-287f7c6c93c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9\"" Jul 16 00:04:17.747955 systemd-networkd[1878]: calie4f486f64b7: Gained IPv6LL Jul 16 00:04:18.517360 containerd[1964]: time="2025-07-16T00:04:18.517302494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778486458-86hwh,Uid:f2a60695-17f2-48fd-96de-335e0509a3da,Namespace:calico-system,Attempt:0,}" Jul 16 00:04:18.588738 systemd-networkd[1878]: cali37edefd4207: Link UP Jul 16 00:04:18.588917 systemd-networkd[1878]: cali37edefd4207: Gained carrier Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.539 [INFO][5602] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0 calico-kube-controllers-7778486458- calico-system f2a60695-17f2-48fd-96de-335e0509a3da 780 0 2025-07-16 00:03:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7778486458 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd calico-kube-controllers-7778486458-86hwh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali37edefd4207 [] [] }} ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.539 [INFO][5602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.553 [INFO][5626] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" HandleID="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.553 [INFO][5626] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" HandleID="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cf7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"calico-kube-controllers-7778486458-86hwh", "timestamp":"2025-07-16 00:04:18.553296973 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.553 [INFO][5626] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.553 [INFO][5626] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.553 [INFO][5626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.558 [INFO][5626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.563 [INFO][5626] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.576 [INFO][5626] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.578 [INFO][5626] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.579 [INFO][5626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.579 [INFO][5626] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.580 [INFO][5626] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.583 [INFO][5626] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.586 [INFO][5626] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.69/26] block=192.168.81.64/26 handle="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.586 [INFO][5626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.69/26] handle="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.586 [INFO][5626] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
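The same affinity and block (192.168.81.64/26) serve every pod in this trace, which is why the assigned addresses simply march upward: .68 above, .69 here, and .70 through .72 further down. A /26 gives this host 2^(32-26) = 64 addresses to hand out locally before it needs another block; a quick check:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.81.64/26")
	n := 1 << (32 - p.Bits()) // 2^6 = 64 addresses per /26 block
	last := p.Addr()
	for i := 1; i < n; i++ {
		last = last.Next()
	}
	fmt.Println(n, p.Addr(), "-", last) // 64 192.168.81.64 - 192.168.81.127
}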
Jul 16 00:04:18.595219 containerd[1964]: 2025-07-16 00:04:18.586 [INFO][5626] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.69/26] IPv6=[] ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" HandleID="k8s-pod-network.c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" Jul 16 00:04:18.595817 containerd[1964]: 2025-07-16 00:04:18.587 [INFO][5602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0", GenerateName:"calico-kube-controllers-7778486458-", Namespace:"calico-system", SelfLink:"", UID:"f2a60695-17f2-48fd-96de-335e0509a3da", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778486458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"calico-kube-controllers-7778486458-86hwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37edefd4207", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:18.595817 containerd[1964]: 2025-07-16 00:04:18.587 [INFO][5602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.69/32] ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" Jul 16 00:04:18.595817 containerd[1964]: 2025-07-16 00:04:18.587 [INFO][5602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37edefd4207 ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" Jul 16 00:04:18.595817 containerd[1964]: 2025-07-16 00:04:18.589 [INFO][5602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" 
WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" Jul 16 00:04:18.595817 containerd[1964]: 2025-07-16 00:04:18.589 [INFO][5602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0", GenerateName:"calico-kube-controllers-7778486458-", Namespace:"calico-system", SelfLink:"", UID:"f2a60695-17f2-48fd-96de-335e0509a3da", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778486458", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b", Pod:"calico-kube-controllers-7778486458-86hwh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.81.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37edefd4207", MAC:"da:cb:e3:1c:4e:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:18.595817 containerd[1964]: 2025-07-16 00:04:18.593 [INFO][5602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" Namespace="calico-system" Pod="calico-kube-controllers-7778486458-86hwh" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--kube--controllers--7778486458--86hwh-eth0" Jul 16 00:04:18.603361 containerd[1964]: time="2025-07-16T00:04:18.603331573Z" level=info msg="connecting to shim c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b" address="unix:///run/containerd/s/88dee8611d5f9c485adef53b102bca50240ea678a4804bfddc5c851dc1047204" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:18.622909 systemd[1]: Started cri-containerd-c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b.scope - libcontainer container c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b. Jul 16 00:04:18.625299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613277195.mount: Deactivated successfully. 
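The "connecting to shim ... protocol=ttrpc version=3" lines mean containerd reaches the per-sandbox shim over a unix socket and speaks ttrpc (a lightweight gRPC-like protocol) on it, while systemd tracks the resulting container as a cri-containerd-<container id>.scope transient unit. The sketch below demonstrates only the transport half, dialing the socket path from the log with the standard library; the ttrpc layer the real client puts on top is not shown.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim socket path copied from the "connecting to shim" line above.
	const sock = "/run/containerd/s/88dee8611d5f9c485adef53b102bca50240ea678a4804bfddc5c851dc1047204"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed (expected anywhere but that CI host):", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket")
}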
Jul 16 00:04:18.648669 kubelet[3310]: I0716 00:04:18.648584 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lj477" podStartSLOduration=31.6485428 podStartE2EDuration="31.6485428s" podCreationTimestamp="2025-07-16 00:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:04:18.648256696 +0000 UTC m=+37.172742389" watchObservedRunningTime="2025-07-16 00:04:18.6485428 +0000 UTC m=+37.173028487" Jul 16 00:04:18.650837 containerd[1964]: time="2025-07-16T00:04:18.650815831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778486458-86hwh,Uid:f2a60695-17f2-48fd-96de-335e0509a3da,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b\"" Jul 16 00:04:18.834964 containerd[1964]: time="2025-07-16T00:04:18.834878777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:18.835134 containerd[1964]: time="2025-07-16T00:04:18.835096190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 16 00:04:18.835504 containerd[1964]: time="2025-07-16T00:04:18.835458980Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:18.836486 containerd[1964]: time="2025-07-16T00:04:18.836446345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:18.836894 containerd[1964]: time="2025-07-16T00:04:18.836850100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.142784045s" Jul 16 00:04:18.836894 containerd[1964]: time="2025-07-16T00:04:18.836865070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 16 00:04:18.837330 containerd[1964]: time="2025-07-16T00:04:18.837318377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 16 00:04:18.837840 containerd[1964]: time="2025-07-16T00:04:18.837829200Z" level=info msg="CreateContainer within sandbox \"6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 16 00:04:18.840932 containerd[1964]: time="2025-07-16T00:04:18.840911104Z" level=info msg="Container 374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:18.843622 containerd[1964]: time="2025-07-16T00:04:18.843582436Z" level=info msg="CreateContainer within sandbox \"6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\"" Jul 16 00:04:18.843842 containerd[1964]: 
time="2025-07-16T00:04:18.843829001Z" level=info msg="StartContainer for \"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\"" Jul 16 00:04:18.844392 containerd[1964]: time="2025-07-16T00:04:18.844378443Z" level=info msg="connecting to shim 374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9" address="unix:///run/containerd/s/b74539e1177e4a5bf14e19b520adedb84ef00615af0b0369e7178a078ac53c9e" protocol=ttrpc version=3 Jul 16 00:04:18.859909 systemd[1]: Started cri-containerd-374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9.scope - libcontainer container 374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9. Jul 16 00:04:18.888094 containerd[1964]: time="2025-07-16T00:04:18.888040707Z" level=info msg="StartContainer for \"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" returns successfully" Jul 16 00:04:18.899904 systemd-networkd[1878]: cali32cc1e2fdad: Gained IPv6LL Jul 16 00:04:18.900223 systemd-networkd[1878]: cali6a0bc295311: Gained IPv6LL Jul 16 00:04:19.518505 containerd[1964]: time="2025-07-16T00:04:19.518379578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbg4v,Uid:69c1261e-5c9d-45dd-bb1d-036a7ac8a303,Namespace:kube-system,Attempt:0,}" Jul 16 00:04:19.518802 containerd[1964]: time="2025-07-16T00:04:19.518378988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-pv79p,Uid:3cad4247-8f73-4bc9-ba43-3b74b00ae4de,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:04:19.586845 systemd-networkd[1878]: calic65c4ddd832: Link UP Jul 16 00:04:19.587759 systemd-networkd[1878]: calic65c4ddd832: Gained carrier Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.549 [INFO][5758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0 coredns-7c65d6cfc9- kube-system 69c1261e-5c9d-45dd-bb1d-036a7ac8a303 779 0 2025-07-16 00:03:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd coredns-7c65d6cfc9-vbg4v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic65c4ddd832 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.549 [INFO][5758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.564 [INFO][5801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" HandleID="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.564 [INFO][5801] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" HandleID="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043b2e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"coredns-7c65d6cfc9-vbg4v", "timestamp":"2025-07-16 00:04:19.564388825 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.564 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.564 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.564 [INFO][5801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.569 [INFO][5801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.572 [INFO][5801] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.575 [INFO][5801] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.576 [INFO][5801] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.578 [INFO][5801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.578 [INFO][5801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.579 [INFO][5801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1 Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.582 [INFO][5801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.584 [INFO][5801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.70/26] block=192.168.81.64/26 handle="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.584 [INFO][5801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.70/26] handle="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.584 [INFO][5801] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:04:19.594540 containerd[1964]: 2025-07-16 00:04:19.584 [INFO][5801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.70/26] IPv6=[] ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" HandleID="k8s-pod-network.1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" Jul 16 00:04:19.594990 containerd[1964]: 2025-07-16 00:04:19.585 [INFO][5758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"69c1261e-5c9d-45dd-bb1d-036a7ac8a303", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"coredns-7c65d6cfc9-vbg4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic65c4ddd832", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:19.594990 containerd[1964]: 2025-07-16 00:04:19.585 [INFO][5758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.70/32] ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" Jul 16 00:04:19.594990 containerd[1964]: 2025-07-16 00:04:19.586 [INFO][5758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic65c4ddd832 ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" Jul 16 00:04:19.594990 containerd[1964]: 2025-07-16 00:04:19.587 [INFO][5758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" Jul 16 00:04:19.594990 containerd[1964]: 2025-07-16 00:04:19.588 [INFO][5758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"69c1261e-5c9d-45dd-bb1d-036a7ac8a303", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1", Pod:"coredns-7c65d6cfc9-vbg4v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.81.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic65c4ddd832", MAC:"3a:0c:bd:15:ea:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:19.594990 containerd[1964]: 2025-07-16 00:04:19.593 [INFO][5758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vbg4v" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-coredns--7c65d6cfc9--vbg4v-eth0" Jul 16 00:04:19.605005 containerd[1964]: time="2025-07-16T00:04:19.604966526Z" level=info msg="connecting to shim 1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1" address="unix:///run/containerd/s/d5aae7f708583e1e4ecae8ce5a3eeb1d2f04a54594e5fb91b1684b258aab3ed2" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:19.632237 systemd[1]: Started cri-containerd-1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1.scope - libcontainer container 1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1. 
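The coredns endpoint dump above renders its unsigned port numbers in hex: 0x35 is port 53 (DNS over UDP and TCP) and 0x23c1 is port 9153, the coredns Prometheus metrics port. A one-line check of the conversion:

package main

import "fmt"

func main() {
	// Port values from the coredns endpoint dump above, shown there in hex.
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p) // 53, 53 and 9153 (map order varies)
	}
}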
Jul 16 00:04:19.671887 kubelet[3310]: I0716 00:04:19.671797 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-qtx46" podStartSLOduration=21.528400846 podStartE2EDuration="23.671735063s" podCreationTimestamp="2025-07-16 00:03:56 +0000 UTC" firstStartedPulling="2025-07-16 00:04:16.693933608 +0000 UTC m=+35.218419295" lastFinishedPulling="2025-07-16 00:04:18.837267824 +0000 UTC m=+37.361753512" observedRunningTime="2025-07-16 00:04:19.671013571 +0000 UTC m=+38.195499319" watchObservedRunningTime="2025-07-16 00:04:19.671735063 +0000 UTC m=+38.196220782" Jul 16 00:04:19.699243 systemd-networkd[1878]: calic892af07adc: Link UP Jul 16 00:04:19.699466 systemd-networkd[1878]: calic892af07adc: Gained carrier Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.554 [INFO][5768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0 calico-apiserver-74b875fcc5- calico-apiserver 3cad4247-8f73-4bc9-ba43-3b74b00ae4de 777 0 2025-07-16 00:03:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b875fcc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd calico-apiserver-74b875fcc5-pv79p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic892af07adc [] [] }} ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.554 [INFO][5768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.569 [INFO][5813] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" HandleID="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.569 [INFO][5813] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" HandleID="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"calico-apiserver-74b875fcc5-pv79p", "timestamp":"2025-07-16 00:04:19.569312165 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.569 
[INFO][5813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.584 [INFO][5813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.584 [INFO][5813] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.670 [INFO][5813] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.677 [INFO][5813] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.681 [INFO][5813] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.683 [INFO][5813] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.685 [INFO][5813] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.685 [INFO][5813] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.687 [INFO][5813] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.690 [INFO][5813] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.695 [INFO][5813] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.81.71/26] block=192.168.81.64/26 handle="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.695 [INFO][5813] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.71/26] handle="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.695 [INFO][5813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
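A few lines up, the kubelet's pod_startup_latency_tracker reports two figures for goldmane-58fd7646b9-qtx46: podStartE2EDuration (pod creation to observed running) and podStartSLOduration, which excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). The check below reuses the logged values; since the creation timestamp is logged truncated to whole seconds, the E2E duration is taken as logged rather than recomputed, and the result differs from the logged 21.528400846s by one nanosecond of rounding.

package main

import (
	"fmt"
	"time"
)

func must(t time.Time, err error) time.Time {
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the kubelet pod_startup_latency_tracker line for
	// goldmane-58fd7646b9-qtx46.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstPull := must(time.Parse(layout, "2025-07-16 00:04:16.693933608 +0000 UTC"))
	lastPull := must(time.Parse(layout, "2025-07-16 00:04:18.837267824 +0000 UTC"))

	e2e := 23671735063 * time.Nanosecond // podStartE2EDuration = 23.671735063s
	pullWindow := lastPull.Sub(firstPull)
	fmt.Println(pullWindow)       // 2.143334216s spent pulling images
	fmt.Println(e2e - pullWindow) // 21.528400847s, vs the logged 21.528400846s
}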
Jul 16 00:04:19.707151 containerd[1964]: 2025-07-16 00:04:19.695 [INFO][5813] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.71/26] IPv6=[] ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" HandleID="k8s-pod-network.e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" Jul 16 00:04:19.707647 containerd[1964]: 2025-07-16 00:04:19.697 [INFO][5768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0", GenerateName:"calico-apiserver-74b875fcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cad4247-8f73-4bc9-ba43-3b74b00ae4de", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b875fcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"calico-apiserver-74b875fcc5-pv79p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic892af07adc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:19.707647 containerd[1964]: 2025-07-16 00:04:19.697 [INFO][5768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.71/32] ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" Jul 16 00:04:19.707647 containerd[1964]: 2025-07-16 00:04:19.697 [INFO][5768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic892af07adc ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" Jul 16 00:04:19.707647 containerd[1964]: 2025-07-16 00:04:19.699 [INFO][5768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" Jul 16 00:04:19.707647 containerd[1964]: 2025-07-16 00:04:19.699 [INFO][5768] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0", GenerateName:"calico-apiserver-74b875fcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cad4247-8f73-4bc9-ba43-3b74b00ae4de", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b875fcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f", Pod:"calico-apiserver-74b875fcc5-pv79p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic892af07adc", MAC:"d6:d0:e6:29:22:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:19.707647 containerd[1964]: 2025-07-16 00:04:19.705 [INFO][5768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-pv79p" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--pv79p-eth0" Jul 16 00:04:19.711905 containerd[1964]: time="2025-07-16T00:04:19.711884186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vbg4v,Uid:69c1261e-5c9d-45dd-bb1d-036a7ac8a303,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1\"" Jul 16 00:04:19.713130 containerd[1964]: time="2025-07-16T00:04:19.713112761Z" level=info msg="CreateContainer within sandbox \"1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 16 00:04:19.716739 containerd[1964]: time="2025-07-16T00:04:19.716703378Z" level=info msg="connecting to shim e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f" address="unix:///run/containerd/s/82461aa794aa463a867afb4563435228cecd872707b82e9bd0834867e511a5b5" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:19.716868 containerd[1964]: time="2025-07-16T00:04:19.716787307Z" level=info msg="Container 9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:19.719078 containerd[1964]: time="2025-07-16T00:04:19.719061685Z" level=info msg="CreateContainer within sandbox 
\"1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1\"" Jul 16 00:04:19.719377 containerd[1964]: time="2025-07-16T00:04:19.719360145Z" level=info msg="StartContainer for \"9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1\"" Jul 16 00:04:19.719853 containerd[1964]: time="2025-07-16T00:04:19.719839472Z" level=info msg="connecting to shim 9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1" address="unix:///run/containerd/s/d5aae7f708583e1e4ecae8ce5a3eeb1d2f04a54594e5fb91b1684b258aab3ed2" protocol=ttrpc version=3 Jul 16 00:04:19.739096 systemd[1]: Started cri-containerd-9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1.scope - libcontainer container 9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1. Jul 16 00:04:19.739798 systemd[1]: Started cri-containerd-e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f.scope - libcontainer container e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f. Jul 16 00:04:19.753637 containerd[1964]: time="2025-07-16T00:04:19.753612797Z" level=info msg="StartContainer for \"9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1\" returns successfully" Jul 16 00:04:19.768618 containerd[1964]: time="2025-07-16T00:04:19.768544372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-pv79p,Uid:3cad4247-8f73-4bc9-ba43-3b74b00ae4de,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f\"" Jul 16 00:04:19.924105 systemd-networkd[1878]: cali37edefd4207: Gained IPv6LL Jul 16 00:04:20.290202 containerd[1964]: time="2025-07-16T00:04:20.290176062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:20.290368 containerd[1964]: time="2025-07-16T00:04:20.290353397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 16 00:04:20.290724 containerd[1964]: time="2025-07-16T00:04:20.290687540Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:20.292862 containerd[1964]: time="2025-07-16T00:04:20.292814710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:20.293508 containerd[1964]: time="2025-07-16T00:04:20.293466264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.45613032s" Jul 16 00:04:20.293508 containerd[1964]: time="2025-07-16T00:04:20.293482764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 16 00:04:20.293949 containerd[1964]: time="2025-07-16T00:04:20.293910818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 16 
00:04:20.294446 containerd[1964]: time="2025-07-16T00:04:20.294432136Z" level=info msg="CreateContainer within sandbox \"b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 16 00:04:20.298282 containerd[1964]: time="2025-07-16T00:04:20.298245962Z" level=info msg="Container 113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:20.301144 containerd[1964]: time="2025-07-16T00:04:20.301131197Z" level=info msg="CreateContainer within sandbox \"b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321\"" Jul 16 00:04:20.301393 containerd[1964]: time="2025-07-16T00:04:20.301378557Z" level=info msg="StartContainer for \"113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321\"" Jul 16 00:04:20.302172 containerd[1964]: time="2025-07-16T00:04:20.302159371Z" level=info msg="connecting to shim 113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321" address="unix:///run/containerd/s/e7ed3ea7684e6e1c3c4ed3173ff13dd13fdda039aea04b746c9871028b4869e0" protocol=ttrpc version=3 Jul 16 00:04:20.322938 systemd[1]: Started cri-containerd-113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321.scope - libcontainer container 113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321. Jul 16 00:04:20.343146 containerd[1964]: time="2025-07-16T00:04:20.343118541Z" level=info msg="StartContainer for \"113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321\" returns successfully" Jul 16 00:04:20.518740 containerd[1964]: time="2025-07-16T00:04:20.518616588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-9xcms,Uid:ee786be9-80e5-49c5-8020-5ce0d112fde3,Namespace:calico-apiserver,Attempt:0,}" Jul 16 00:04:20.569929 systemd-networkd[1878]: cali0c150315a34: Link UP Jul 16 00:04:20.570052 systemd-networkd[1878]: cali0c150315a34: Gained carrier Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.537 [INFO][6028] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0 calico-apiserver-74b875fcc5- calico-apiserver ee786be9-80e5-49c5-8020-5ce0d112fde3 775 0 2025-07-16 00:03:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74b875fcc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4372.0.1-n-fdc39dabbd calico-apiserver-74b875fcc5-9xcms eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0c150315a34 [] [] }} ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.537 [INFO][6028] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" Jul 16 
00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.548 [INFO][6050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" HandleID="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.548 [INFO][6050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" HandleID="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4372.0.1-n-fdc39dabbd", "pod":"calico-apiserver-74b875fcc5-9xcms", "timestamp":"2025-07-16 00:04:20.548780017 +0000 UTC"}, Hostname:"ci-4372.0.1-n-fdc39dabbd", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.548 [INFO][6050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.548 [INFO][6050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.548 [INFO][6050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4372.0.1-n-fdc39dabbd' Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.553 [INFO][6050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.555 [INFO][6050] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.558 [INFO][6050] ipam/ipam.go 511: Trying affinity for 192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.559 [INFO][6050] ipam/ipam.go 158: Attempting to load block cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.560 [INFO][6050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.81.64/26 host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.560 [INFO][6050] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.81.64/26 handle="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.561 [INFO][6050] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935 Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.564 [INFO][6050] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.81.64/26 handle="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.567 [INFO][6050] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.81.72/26] block=192.168.81.64/26 handle="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.567 [INFO][6050] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.81.72/26] handle="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" host="ci-4372.0.1-n-fdc39dabbd" Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.567 [INFO][6050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 16 00:04:20.575124 containerd[1964]: 2025-07-16 00:04:20.568 [INFO][6050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.81.72/26] IPv6=[] ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" HandleID="k8s-pod-network.2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Workload="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" Jul 16 00:04:20.575503 containerd[1964]: 2025-07-16 00:04:20.568 [INFO][6028] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0", GenerateName:"calico-apiserver-74b875fcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee786be9-80e5-49c5-8020-5ce0d112fde3", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b875fcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"", Pod:"calico-apiserver-74b875fcc5-9xcms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c150315a34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:20.575503 containerd[1964]: 2025-07-16 00:04:20.569 [INFO][6028] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.81.72/32] ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" Jul 16 00:04:20.575503 containerd[1964]: 2025-07-16 00:04:20.569 [INFO][6028] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c150315a34 ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" 
Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" Jul 16 00:04:20.575503 containerd[1964]: 2025-07-16 00:04:20.570 [INFO][6028] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" Jul 16 00:04:20.575503 containerd[1964]: 2025-07-16 00:04:20.570 [INFO][6028] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0", GenerateName:"calico-apiserver-74b875fcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee786be9-80e5-49c5-8020-5ce0d112fde3", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.July, 16, 0, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74b875fcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4372.0.1-n-fdc39dabbd", ContainerID:"2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935", Pod:"calico-apiserver-74b875fcc5-9xcms", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.81.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0c150315a34", MAC:"a2:92:95:b9:47:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 16 00:04:20.575503 containerd[1964]: 2025-07-16 00:04:20.574 [INFO][6028] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" Namespace="calico-apiserver" Pod="calico-apiserver-74b875fcc5-9xcms" WorkloadEndpoint="ci--4372.0.1--n--fdc39dabbd-k8s-calico--apiserver--74b875fcc5--9xcms-eth0" Jul 16 00:04:20.583005 containerd[1964]: time="2025-07-16T00:04:20.582947994Z" level=info msg="connecting to shim 2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935" address="unix:///run/containerd/s/b6c0da203fcafe6de7cf334e31c5454a6cb1b0da6a891be4ff88356986a5acdd" namespace=k8s.io protocol=ttrpc version=3 Jul 16 00:04:20.598956 systemd[1]: Started cri-containerd-2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935.scope - libcontainer container 2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935. 
Jul 16 00:04:20.631158 containerd[1964]: time="2025-07-16T00:04:20.631094684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74b875fcc5-9xcms,Uid:ee786be9-80e5-49c5-8020-5ce0d112fde3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935\"" Jul 16 00:04:20.668134 kubelet[3310]: I0716 00:04:20.668067 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:20.690475 kubelet[3310]: I0716 00:04:20.690400 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vbg4v" podStartSLOduration=33.690372147 podStartE2EDuration="33.690372147s" podCreationTimestamp="2025-07-16 00:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-16 00:04:20.689634055 +0000 UTC m=+39.214119782" watchObservedRunningTime="2025-07-16 00:04:20.690372147 +0000 UTC m=+39.214857860" Jul 16 00:04:21.076062 systemd-networkd[1878]: calic65c4ddd832: Gained IPv6LL Jul 16 00:04:21.332147 systemd-networkd[1878]: calic892af07adc: Gained IPv6LL Jul 16 00:04:22.239480 containerd[1964]: time="2025-07-16T00:04:22.239426513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:22.239736 containerd[1964]: time="2025-07-16T00:04:22.239629634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 16 00:04:22.240038 containerd[1964]: time="2025-07-16T00:04:22.239998417Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:22.240816 containerd[1964]: time="2025-07-16T00:04:22.240802643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:22.241184 containerd[1964]: time="2025-07-16T00:04:22.241142504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 1.947215941s" Jul 16 00:04:22.241184 containerd[1964]: time="2025-07-16T00:04:22.241159775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 16 00:04:22.241663 containerd[1964]: time="2025-07-16T00:04:22.241650245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 00:04:22.244732 containerd[1964]: time="2025-07-16T00:04:22.244712763Z" level=info msg="CreateContainer within sandbox \"c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 16 00:04:22.271086 containerd[1964]: time="2025-07-16T00:04:22.271027170Z" level=info msg="Container 83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e: CDI devices from CRI Config.CDIDevices: []" Jul 
16 00:04:22.273850 containerd[1964]: time="2025-07-16T00:04:22.273790324Z" level=info msg="CreateContainer within sandbox \"c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\"" Jul 16 00:04:22.274152 containerd[1964]: time="2025-07-16T00:04:22.274110616Z" level=info msg="StartContainer for \"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\"" Jul 16 00:04:22.274681 containerd[1964]: time="2025-07-16T00:04:22.274644359Z" level=info msg="connecting to shim 83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e" address="unix:///run/containerd/s/88dee8611d5f9c485adef53b102bca50240ea678a4804bfddc5c851dc1047204" protocol=ttrpc version=3 Jul 16 00:04:22.294993 systemd[1]: Started cri-containerd-83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e.scope - libcontainer container 83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e. Jul 16 00:04:22.322643 containerd[1964]: time="2025-07-16T00:04:22.322620012Z" level=info msg="StartContainer for \"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" returns successfully" Jul 16 00:04:22.483896 systemd-networkd[1878]: cali0c150315a34: Gained IPv6LL Jul 16 00:04:22.679438 kubelet[3310]: I0716 00:04:22.679353 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7778486458-86hwh" podStartSLOduration=22.089227742 podStartE2EDuration="25.679340369s" podCreationTimestamp="2025-07-16 00:03:57 +0000 UTC" firstStartedPulling="2025-07-16 00:04:18.651477402 +0000 UTC m=+37.175963091" lastFinishedPulling="2025-07-16 00:04:22.241590029 +0000 UTC m=+40.766075718" observedRunningTime="2025-07-16 00:04:22.67871063 +0000 UTC m=+41.203196318" watchObservedRunningTime="2025-07-16 00:04:22.679340369 +0000 UTC m=+41.203826054" Jul 16 00:04:23.708801 containerd[1964]: time="2025-07-16T00:04:23.708773282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"e09521511f8dfb30b38a5cd0504745f099e970fe7981f63d98561702d44e33ce\" pid:6205 exited_at:{seconds:1752624263 nanos:708620858}" Jul 16 00:04:24.138453 containerd[1964]: time="2025-07-16T00:04:24.138379694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:24.138618 containerd[1964]: time="2025-07-16T00:04:24.138601947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 16 00:04:24.139065 containerd[1964]: time="2025-07-16T00:04:24.139022295Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:24.139938 containerd[1964]: time="2025-07-16T00:04:24.139925679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:24.140327 containerd[1964]: time="2025-07-16T00:04:24.140313962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.89864862s" Jul 16 00:04:24.140362 containerd[1964]: time="2025-07-16T00:04:24.140329324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 16 00:04:24.140770 containerd[1964]: time="2025-07-16T00:04:24.140757764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 16 00:04:24.141314 containerd[1964]: time="2025-07-16T00:04:24.141300594Z" level=info msg="CreateContainer within sandbox \"e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:04:24.143932 containerd[1964]: time="2025-07-16T00:04:24.143919078Z" level=info msg="Container 7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:24.148647 containerd[1964]: time="2025-07-16T00:04:24.148603978Z" level=info msg="CreateContainer within sandbox \"e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd\"" Jul 16 00:04:24.148945 containerd[1964]: time="2025-07-16T00:04:24.148907670Z" level=info msg="StartContainer for \"7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd\"" Jul 16 00:04:24.149467 containerd[1964]: time="2025-07-16T00:04:24.149426046Z" level=info msg="connecting to shim 7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd" address="unix:///run/containerd/s/82461aa794aa463a867afb4563435228cecd872707b82e9bd0834867e511a5b5" protocol=ttrpc version=3 Jul 16 00:04:24.174912 systemd[1]: Started cri-containerd-7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd.scope - libcontainer container 7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd. 
Jul 16 00:04:24.203659 containerd[1964]: time="2025-07-16T00:04:24.203638466Z" level=info msg="StartContainer for \"7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd\" returns successfully" Jul 16 00:04:24.699660 kubelet[3310]: I0716 00:04:24.699509 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74b875fcc5-pv79p" podStartSLOduration=26.327957775 podStartE2EDuration="30.699457593s" podCreationTimestamp="2025-07-16 00:03:54 +0000 UTC" firstStartedPulling="2025-07-16 00:04:19.769209168 +0000 UTC m=+38.293694856" lastFinishedPulling="2025-07-16 00:04:24.140708986 +0000 UTC m=+42.665194674" observedRunningTime="2025-07-16 00:04:24.699273094 +0000 UTC m=+43.223758856" watchObservedRunningTime="2025-07-16 00:04:24.699457593 +0000 UTC m=+43.223943334" Jul 16 00:04:25.606840 containerd[1964]: time="2025-07-16T00:04:25.606811016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:25.607086 containerd[1964]: time="2025-07-16T00:04:25.607015476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 16 00:04:25.607374 containerd[1964]: time="2025-07-16T00:04:25.607361142Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:25.608437 containerd[1964]: time="2025-07-16T00:04:25.608426106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:25.608666 containerd[1964]: time="2025-07-16T00:04:25.608651958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.46787372s" Jul 16 00:04:25.608698 containerd[1964]: time="2025-07-16T00:04:25.608671727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 16 00:04:25.609152 containerd[1964]: time="2025-07-16T00:04:25.609111778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 16 00:04:25.609817 containerd[1964]: time="2025-07-16T00:04:25.609801081Z" level=info msg="CreateContainer within sandbox \"b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 16 00:04:25.613128 containerd[1964]: time="2025-07-16T00:04:25.613082583Z" level=info msg="Container 34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:25.618316 containerd[1964]: time="2025-07-16T00:04:25.618275071Z" level=info msg="CreateContainer within sandbox \"b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc\"" Jul 16 00:04:25.618580 containerd[1964]: time="2025-07-16T00:04:25.618525693Z" level=info msg="StartContainer for \"34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc\"" Jul 16 00:04:25.619498 containerd[1964]: time="2025-07-16T00:04:25.619484824Z" level=info msg="connecting to shim 34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc" address="unix:///run/containerd/s/e7ed3ea7684e6e1c3c4ed3173ff13dd13fdda039aea04b746c9871028b4869e0" protocol=ttrpc version=3 Jul 16 00:04:25.649000 systemd[1]: Started cri-containerd-34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc.scope - libcontainer container 34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc. Jul 16 00:04:25.670970 containerd[1964]: time="2025-07-16T00:04:25.670935451Z" level=info msg="StartContainer for \"34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc\" returns successfully" Jul 16 00:04:25.683433 kubelet[3310]: I0716 00:04:25.683411 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:25.689417 kubelet[3310]: I0716 00:04:25.689350 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lx85x" podStartSLOduration=20.811374468 podStartE2EDuration="28.689323808s" podCreationTimestamp="2025-07-16 00:03:57 +0000 UTC" firstStartedPulling="2025-07-16 00:04:17.731108466 +0000 UTC m=+36.255594154" lastFinishedPulling="2025-07-16 00:04:25.609057805 +0000 UTC m=+44.133543494" observedRunningTime="2025-07-16 00:04:25.68921564 +0000 UTC m=+44.213701329" watchObservedRunningTime="2025-07-16 00:04:25.689323808 +0000 UTC m=+44.213809494" Jul 16 00:04:26.034918 containerd[1964]: time="2025-07-16T00:04:26.034891761Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 16 00:04:26.035123 containerd[1964]: time="2025-07-16T00:04:26.035110234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 16 00:04:26.036187 containerd[1964]: time="2025-07-16T00:04:26.036149927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 427.021382ms" Jul 16 00:04:26.036187 containerd[1964]: time="2025-07-16T00:04:26.036166515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 16 00:04:26.037167 containerd[1964]: time="2025-07-16T00:04:26.037126459Z" level=info msg="CreateContainer within sandbox \"2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 16 00:04:26.039883 containerd[1964]: time="2025-07-16T00:04:26.039839642Z" level=info msg="Container 3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1: CDI devices from CRI Config.CDIDevices: []" Jul 16 00:04:26.044481 containerd[1964]: time="2025-07-16T00:04:26.044467340Z" level=info msg="CreateContainer within sandbox \"2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1\"" Jul 16 00:04:26.044688 containerd[1964]: time="2025-07-16T00:04:26.044677429Z" level=info msg="StartContainer for \"3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1\"" Jul 16 00:04:26.045210 containerd[1964]: time="2025-07-16T00:04:26.045199056Z" level=info msg="connecting to shim 3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1" address="unix:///run/containerd/s/b6c0da203fcafe6de7cf334e31c5454a6cb1b0da6a891be4ff88356986a5acdd" protocol=ttrpc version=3 Jul 16 00:04:26.061966 systemd[1]: Started cri-containerd-3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1.scope - libcontainer container 3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1. Jul 16 00:04:26.103135 containerd[1964]: time="2025-07-16T00:04:26.103107824Z" level=info msg="StartContainer for \"3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1\" returns successfully" Jul 16 00:04:26.561836 kubelet[3310]: I0716 00:04:26.561745 3310 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 16 00:04:26.562942 kubelet[3310]: I0716 00:04:26.561905 3310 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 16 00:04:26.695174 kubelet[3310]: I0716 00:04:26.695117 3310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74b875fcc5-9xcms" podStartSLOduration=27.290211663 podStartE2EDuration="32.695100588s" podCreationTimestamp="2025-07-16 00:03:54 +0000 UTC" firstStartedPulling="2025-07-16 00:04:20.631658813 +0000 UTC m=+39.156144501" lastFinishedPulling="2025-07-16 00:04:26.036547737 +0000 UTC m=+44.561033426" observedRunningTime="2025-07-16 00:04:26.694476393 +0000 UTC m=+45.218962097" watchObservedRunningTime="2025-07-16 00:04:26.695100588 +0000 UTC m=+45.219586283" Jul 16 00:04:27.062370 kubelet[3310]: I0716 00:04:27.062335 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:27.134045 containerd[1964]: time="2025-07-16T00:04:27.134006496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"429d7078cabece2e78219e3c6d62da901d5e36af0588e97d984d0f73c09a6d3f\" pid:6384 exit_status:1 exited_at:{seconds:1752624267 nanos:133757334}" Jul 16 00:04:27.186597 containerd[1964]: time="2025-07-16T00:04:27.186540863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"8d45c68abbb5090c59ec531cbda81482dd705a2cba057d9a6d36c0a2c6dc8959\" pid:6419 exit_status:1 exited_at:{seconds:1752624267 nanos:186338367}" Jul 16 00:04:27.689527 kubelet[3310]: I0716 00:04:27.689467 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:28.562225 systemd[1]: Started sshd@9-147.75.203.227:22-190.128.241.2:55450.service - OpenSSH per-connection server daemon (190.128.241.2:55450). 
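[Editorial note] The pod_startup_latency_tracker line above for calico-apiserver-74b875fcc5-9xcms is internally consistent: podStartE2EDuration is the observed-running time minus the pod creation time, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). A small Go sketch of the same arithmetic using the values printed in the log (the final nanosecond can differ slightly because kubelet works from the monotonic m=+... offsets):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted when parsing
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps reported for pod calico-apiserver/calico-apiserver-74b875fcc5-9xcms.
	created := parse("2025-07-16 00:03:54 +0000 UTC")
	firstPull := parse("2025-07-16 00:04:20.631658813 +0000 UTC")
	lastPull := parse("2025-07-16 00:04:26.036547737 +0000 UTC")
	running := parse("2025-07-16 00:04:26.695100588 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration (pull window excluded)

	fmt.Println("E2E:", e2e) // 32.695100588s, matching the log
	fmt.Println("SLO:", slo) // ~27.290211664s vs kubelet's 27.290211663 (monotonic rounding)
}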
Jul 16 00:04:29.879000 sshd[6438]: Received disconnect from 190.128.241.2 port 55450:11: Bye Bye [preauth] Jul 16 00:04:29.879000 sshd[6438]: Disconnected from authenticating user root 190.128.241.2 port 55450 [preauth] Jul 16 00:04:29.884125 systemd[1]: sshd@9-147.75.203.227:22-190.128.241.2:55450.service: Deactivated successfully. Jul 16 00:04:30.214707 kubelet[3310]: I0716 00:04:30.214682 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:04:39.990077 containerd[1964]: time="2025-07-16T00:04:39.990018483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"17490d14d78725e363f3f4927db41a02271dc01b363cca82e1ad2801e32457bc\" pid:6464 exited_at:{seconds:1752624279 nanos:989788644}" Jul 16 00:04:41.743441 containerd[1964]: time="2025-07-16T00:04:41.743414287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"71fa02c77bf9858fd8efaa0f31d6c3164e36735480f73a1e797e178a4f746439\" pid:6502 exited_at:{seconds:1752624281 nanos:743314283}" Jul 16 00:04:43.930061 containerd[1964]: time="2025-07-16T00:04:43.930021446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"92c5261b16d7bdf0c2179c84ba4f64d8de59bba9631c335de278171c11c36e76\" pid:6523 exited_at:{seconds:1752624283 nanos:929814393}" Jul 16 00:04:57.158271 containerd[1964]: time="2025-07-16T00:04:57.158217073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"9e393d2ff07be72c49bb9b1cfea30ba34fb8620535bd5d41ddfb6fbc6da60c53\" pid:6566 exited_at:{seconds:1752624297 nanos:157963298}" Jul 16 00:05:01.218872 kubelet[3310]: I0716 00:05:01.218815 3310 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 16 00:05:11.760115 containerd[1964]: time="2025-07-16T00:05:11.760083113Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"0ec3e67f3fef7d2f83f3c6dc0a30f4743bb35deb0e947f6d8861c54cedb5e80c\" pid:6606 exited_at:{seconds:1752624311 nanos:759900640}" Jul 16 00:05:13.905739 containerd[1964]: time="2025-07-16T00:05:13.905704843Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"f4c90776e94863e33a172b53225c2e88485a876f4086e0fb0fd3d5b8d742d0e0\" pid:6628 exited_at:{seconds:1752624313 nanos:905305790}" Jul 16 00:05:18.837101 update_engine[1952]: I20250716 00:05:18.836967 1952 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 16 00:05:18.837101 update_engine[1952]: I20250716 00:05:18.837059 1952 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 16 00:05:18.838118 update_engine[1952]: I20250716 00:05:18.837407 1952 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 16 00:05:18.838502 update_engine[1952]: I20250716 00:05:18.838398 1952 omaha_request_params.cc:62] Current group set to alpha Jul 16 00:05:18.838676 update_engine[1952]: I20250716 00:05:18.838618 1952 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jul 16 00:05:18.838676 update_engine[1952]: I20250716 00:05:18.838654 1952 update_attempter.cc:643] Scheduling an action processor start. Jul 16 00:05:18.838920 update_engine[1952]: I20250716 00:05:18.838707 1952 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 16 00:05:18.838920 update_engine[1952]: I20250716 00:05:18.838827 1952 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 16 00:05:18.839115 update_engine[1952]: I20250716 00:05:18.839054 1952 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 16 00:05:18.839225 update_engine[1952]: I20250716 00:05:18.839104 1952 omaha_request_action.cc:272] Request: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: Jul 16 00:05:18.839225 update_engine[1952]: I20250716 00:05:18.839138 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:05:18.840047 locksmithd[2015]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 16 00:05:18.843164 update_engine[1952]: I20250716 00:05:18.843056 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:05:18.844067 update_engine[1952]: I20250716 00:05:18.843948 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:05:18.844404 update_engine[1952]: E20250716 00:05:18.844307 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:05:18.844576 update_engine[1952]: I20250716 00:05:18.844457 1952 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 16 00:05:19.063728 containerd[1964]: time="2025-07-16T00:05:19.063699021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"369c2290d652fc9e6fc75367e84177f4f394ed71042111ea85e003bad16a5d8c\" pid:6665 exited_at:{seconds:1752624319 nanos:63571777}" Jul 16 00:05:27.133071 containerd[1964]: time="2025-07-16T00:05:27.133040550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"d4ca572b20c5f02ffb0a3760d497997697f972cc00f7f27d2045bb6d4f975610\" pid:6688 exited_at:{seconds:1752624327 nanos:132763093}" Jul 16 00:05:28.794077 update_engine[1952]: I20250716 00:05:28.793931 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:05:28.794901 update_engine[1952]: I20250716 00:05:28.794445 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:05:28.795160 update_engine[1952]: I20250716 00:05:28.795051 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 16 00:05:28.795479 update_engine[1952]: E20250716 00:05:28.795375 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:05:28.795657 update_engine[1952]: I20250716 00:05:28.795547 1952 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 16 00:05:38.795077 update_engine[1952]: I20250716 00:05:38.794907 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:05:38.796062 update_engine[1952]: I20250716 00:05:38.795495 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:05:38.796241 update_engine[1952]: I20250716 00:05:38.796173 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:05:38.796800 update_engine[1952]: E20250716 00:05:38.796661 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:05:38.796980 update_engine[1952]: I20250716 00:05:38.796871 1952 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 16 00:05:39.999944 containerd[1964]: time="2025-07-16T00:05:39.999915184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"e76af373f4c03faed74b2abc991ca7263ad56990189461f7085d46bdf8d30bbc\" pid:6728 exited_at:{seconds:1752624339 nanos:999711473}" Jul 16 00:05:41.807330 containerd[1964]: time="2025-07-16T00:05:41.807299142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"3840fe15c51ed52f280cbdd81fb99e038f84a35583b02b5990bac9428785bd80\" pid:6782 exited_at:{seconds:1752624341 nanos:807124783}" Jul 16 00:05:43.873188 containerd[1964]: time="2025-07-16T00:05:43.873131956Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"93776588200a9a4fa2673514164fa573fe2fee2667702b4503c3a1e0d2753059\" pid:6805 exited_at:{seconds:1752624343 nanos:872678131}" Jul 16 00:05:48.794199 update_engine[1952]: I20250716 00:05:48.794037 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:05:48.795061 update_engine[1952]: I20250716 00:05:48.794574 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:05:48.795350 update_engine[1952]: I20250716 00:05:48.795245 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:05:48.795752 update_engine[1952]: E20250716 00:05:48.795660 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:05:48.796042 update_engine[1952]: I20250716 00:05:48.795850 1952 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 16 00:05:48.796042 update_engine[1952]: I20250716 00:05:48.795884 1952 omaha_request_action.cc:617] Omaha request response: Jul 16 00:05:48.796234 update_engine[1952]: E20250716 00:05:48.796048 1952 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 16 00:05:48.796234 update_engine[1952]: I20250716 00:05:48.796094 1952 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 16 00:05:48.796234 update_engine[1952]: I20250716 00:05:48.796109 1952 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 00:05:48.796234 update_engine[1952]: I20250716 00:05:48.796123 1952 update_attempter.cc:306] Processing Done. 
Jul 16 00:05:48.796234 update_engine[1952]: E20250716 00:05:48.796154 1952 update_attempter.cc:619] Update failed. Jul 16 00:05:48.796234 update_engine[1952]: I20250716 00:05:48.796168 1952 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 16 00:05:48.796234 update_engine[1952]: I20250716 00:05:48.796182 1952 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 16 00:05:48.796234 update_engine[1952]: I20250716 00:05:48.796196 1952 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 16 00:05:48.796898 update_engine[1952]: I20250716 00:05:48.796351 1952 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 16 00:05:48.796898 update_engine[1952]: I20250716 00:05:48.796435 1952 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 16 00:05:48.796898 update_engine[1952]: I20250716 00:05:48.796468 1952 omaha_request_action.cc:272] Request: Jul 16 00:05:48.796898 update_engine[1952]: Jul 16 00:05:48.796898 update_engine[1952]: Jul 16 00:05:48.796898 update_engine[1952]: Jul 16 00:05:48.796898 update_engine[1952]: Jul 16 00:05:48.796898 update_engine[1952]: Jul 16 00:05:48.796898 update_engine[1952]: Jul 16 00:05:48.796898 update_engine[1952]: I20250716 00:05:48.796494 1952 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 16 00:05:48.797663 update_engine[1952]: I20250716 00:05:48.797050 1952 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 16 00:05:48.797663 update_engine[1952]: I20250716 00:05:48.797593 1952 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 16 00:05:48.797883 locksmithd[2015]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 16 00:05:48.798418 update_engine[1952]: E20250716 00:05:48.797914 1952 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 16 00:05:48.798418 update_engine[1952]: I20250716 00:05:48.798003 1952 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 16 00:05:48.798418 update_engine[1952]: I20250716 00:05:48.798027 1952 omaha_request_action.cc:617] Omaha request response: Jul 16 00:05:48.798418 update_engine[1952]: I20250716 00:05:48.798043 1952 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 00:05:48.798418 update_engine[1952]: I20250716 00:05:48.798056 1952 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 16 00:05:48.798418 update_engine[1952]: I20250716 00:05:48.798069 1952 update_attempter.cc:306] Processing Done. Jul 16 00:05:48.798418 update_engine[1952]: I20250716 00:05:48.798084 1952 update_attempter.cc:310] Error event sent. Jul 16 00:05:48.798418 update_engine[1952]: I20250716 00:05:48.798106 1952 update_check_scheduler.cc:74] Next update check in 44m30s Jul 16 00:05:48.798535 locksmithd[2015]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 16 00:05:51.286165 systemd[1]: Started sshd@10-147.75.203.227:22-190.128.241.2:54770.service - OpenSSH per-connection server daemon (190.128.241.2:54770). 
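[Editorial note] The update_engine sequence above posts its Omaha check to the literal host "disabled" (typically the SERVER= value used in Flatcar's update.conf to switch updates off), so every libcurl transfer fails with "Could not resolve host: disabled"; the fetcher retries a few times roughly ten seconds apart, maps the failure to kActionCodeOmahaErrorInHTTPResponse, and schedules the next check (44m30s later here). A schematic Go sketch of that bounded-retry shape, not update_engine's actual implementation:

package main

import (
	"fmt"
	"net"
	"time"
)

// checkOnce stands in for one Omaha POST. With the server name set to the
// literal string "disabled", the name lookup itself fails, which is the
// "Could not resolve host: disabled" error in the journal above.
func checkOnce(server string) error {
	if _, err := net.LookupHost(server); err != nil {
		return fmt.Errorf("could not resolve host %q: %w", server, err)
	}
	return nil // a real client would POST the Omaha XML request here
}

func main() {
	const (
		server     = "disabled"
		maxRetries = 3                // the journal shows "retry 1".."retry 3" before giving up
		retryDelay = 10 * time.Second // the attempts above are spaced roughly 10s apart
	)

	for attempt := 1; attempt <= maxRetries+1; attempt++ {
		err := checkOnce(server)
		if err == nil {
			fmt.Println("update check succeeded")
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		if attempt <= maxRetries {
			time.Sleep(retryDelay)
		}
	}
	// Mirrors "Update failed." and "Next update check in 44m30s" from the log.
	fmt.Println("giving up; next update check scheduled in ~44m")
}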
Jul 16 00:05:52.606056 sshd[6839]: Received disconnect from 190.128.241.2 port 54770:11: Bye Bye [preauth] Jul 16 00:05:52.606056 sshd[6839]: Disconnected from authenticating user root 190.128.241.2 port 54770 [preauth] Jul 16 00:05:52.609578 systemd[1]: sshd@10-147.75.203.227:22-190.128.241.2:54770.service: Deactivated successfully. Jul 16 00:05:57.121113 containerd[1964]: time="2025-07-16T00:05:57.121090262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"3e7d338b254894195fbeec6da5612b0556f968848b90b60c91b811ac0c3979a6\" pid:6855 exited_at:{seconds:1752624357 nanos:120848890}" Jul 16 00:06:11.807574 containerd[1964]: time="2025-07-16T00:06:11.807513986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"640a71cf48d29c9d001749517850cd8406fd6713a3f06eb04b995935f218246c\" pid:6887 exited_at:{seconds:1752624371 nanos:807370398}" Jul 16 00:06:13.890140 containerd[1964]: time="2025-07-16T00:06:13.890082872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"16147058bf9c066bd6686799d25dab716de63faba7dc59508c763f3cf3e4781a\" pid:6909 exited_at:{seconds:1752624373 nanos:889818826}" Jul 16 00:06:19.057045 containerd[1964]: time="2025-07-16T00:06:19.057017110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"3c04f82a7c839fdd2ce0d29abf5a8c812786cea37437037984aae98d9d724902\" pid:6948 exited_at:{seconds:1752624379 nanos:56877240}" Jul 16 00:06:27.135573 containerd[1964]: time="2025-07-16T00:06:27.135537418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"8bd7ed7f22e99aa462f7a7542f0f1a307371943cd7f483013863a842b53c7652\" pid:6970 exited_at:{seconds:1752624387 nanos:135267653}" Jul 16 00:06:40.007190 containerd[1964]: time="2025-07-16T00:06:40.007153482Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"fb8703da8a453e5409cf7b7358952a7becfbe7746ca78a8d574d90efb9e46c40\" pid:7002 exited_at:{seconds:1752624400 nanos:6917387}" Jul 16 00:06:41.796362 containerd[1964]: time="2025-07-16T00:06:41.796328924Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"a9c6b3ccac302b92e2912b571ac51159a9c6c98b129709a5d9cdb51b54d4f432\" pid:7038 exited_at:{seconds:1752624401 nanos:796109693}" Jul 16 00:06:43.876995 containerd[1964]: time="2025-07-16T00:06:43.876969654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"08a2956d6d4c3fb371c4d09062b9b6f85d5f4ea393726ee4c6181d0dc068c4b0\" pid:7060 exited_at:{seconds:1752624403 nanos:876748658}" Jul 16 00:06:57.119548 containerd[1964]: time="2025-07-16T00:06:57.119520460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"b525629851a4f2eb3e3a87dfe2dfc7cef1ebc616b9c8f80a83dfe75e5c4fd3d5\" pid:7104 exited_at:{seconds:1752624417 nanos:119269280}" Jul 16 00:07:10.688585 systemd[1]: Started sshd@11-147.75.203.227:22-190.128.241.2:38856.service - OpenSSH per-connection server 
daemon (190.128.241.2:38856). Jul 16 00:07:11.764237 containerd[1964]: time="2025-07-16T00:07:11.764213377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"3ed28c88d2b5f9a7750dfd86f66518a44eae198f57eb7518ce2595760ce688ac\" pid:7140 exited_at:{seconds:1752624431 nanos:764102476}" Jul 16 00:07:12.021504 sshd[7126]: Received disconnect from 190.128.241.2 port 38856:11: Bye Bye [preauth] Jul 16 00:07:12.021504 sshd[7126]: Disconnected from authenticating user root 190.128.241.2 port 38856 [preauth] Jul 16 00:07:12.024940 systemd[1]: sshd@11-147.75.203.227:22-190.128.241.2:38856.service: Deactivated successfully. Jul 16 00:07:13.874854 containerd[1964]: time="2025-07-16T00:07:13.874747928Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"e083792c4d38ec136cba09970ffdbe0ad6114c7427bcf7d6c4d8e5eb0dcbbb81\" pid:7164 exited_at:{seconds:1752624433 nanos:874491474}" Jul 16 00:07:19.111654 containerd[1964]: time="2025-07-16T00:07:19.111621040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"494eaeed64a3846efdaaef1ef20aecd153a9d0f8cc00bd7334d2cb016c6fd886\" pid:7219 exited_at:{seconds:1752624439 nanos:111480159}" Jul 16 00:07:27.127061 containerd[1964]: time="2025-07-16T00:07:27.127009019Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"78fa1445e02464ae043671b8a1f3bb67425e814ffe74ec7377b1ed90e86cf3a6\" pid:7248 exited_at:{seconds:1752624447 nanos:126790607}" Jul 16 00:07:39.992820 containerd[1964]: time="2025-07-16T00:07:39.992749421Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"157868946088e90ac61e3b137da956abfa07b9f4d276e389419fae14496882b5\" pid:7278 exited_at:{seconds:1752624459 nanos:992539680}" Jul 16 00:07:41.807701 containerd[1964]: time="2025-07-16T00:07:41.807671923Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"7d21cd2a7a2fae3e290fe17031958e2ab08621495d65f2968e011821253d6c24\" pid:7314 exited_at:{seconds:1752624461 nanos:807551937}" Jul 16 00:07:43.883042 containerd[1964]: time="2025-07-16T00:07:43.883011672Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"03189bf0166762f6d98ccbce2d49293eb80d6cddf819688bab7b534cbdc8ba26\" pid:7335 exited_at:{seconds:1752624463 nanos:882773125}" Jul 16 00:07:57.130978 containerd[1964]: time="2025-07-16T00:07:57.130950642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"6ac09a655823498ca28ba687746980a7353da310efb13351412534d8e7bb8ba5\" pid:7375 exited_at:{seconds:1752624477 nanos:130709720}" Jul 16 00:08:11.812523 containerd[1964]: time="2025-07-16T00:08:11.812496163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"09baaae0d955a1ebdc98916fa73a253fb9134adad43fcd1ae38eca107a8101d6\" pid:7410 exited_at:{seconds:1752624491 nanos:812364255}" Jul 16 00:08:13.877239 containerd[1964]: time="2025-07-16T00:08:13.877210077Z" level=info 
msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"eb869c7fe3d08f10f8aab690a0420f8f0821781d1dcbcadef009f15c26f03c67\" pid:7432 exited_at:{seconds:1752624493 nanos:876998039}" Jul 16 00:08:19.074677 containerd[1964]: time="2025-07-16T00:08:19.074643023Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"0dca2f7b4ef8537d098c3915b12c14d23b4567ef32fb0363ba05ead8352c9712\" pid:7471 exited_at:{seconds:1752624499 nanos:74414700}" Jul 16 00:08:27.166069 containerd[1964]: time="2025-07-16T00:08:27.166038041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"004f8f271a8a85774ce46aabb8470bda14681766ff9999b8148790b68a4d3dc6\" pid:7492 exited_at:{seconds:1752624507 nanos:165806897}" Jul 16 00:08:31.310369 systemd[1]: Started sshd@12-147.75.203.227:22-106.241.54.211:37348.service - OpenSSH per-connection server daemon (106.241.54.211:37348). Jul 16 00:08:32.117060 sshd[7517]: Invalid user keshav from 106.241.54.211 port 37348 Jul 16 00:08:32.258373 sshd[7517]: Received disconnect from 106.241.54.211 port 37348:11: Bye Bye [preauth] Jul 16 00:08:32.258373 sshd[7517]: Disconnected from invalid user keshav 106.241.54.211 port 37348 [preauth] Jul 16 00:08:32.261836 systemd[1]: sshd@12-147.75.203.227:22-106.241.54.211:37348.service: Deactivated successfully. Jul 16 00:08:32.662931 systemd[1]: Started sshd@13-147.75.203.227:22-190.128.241.2:39292.service - OpenSSH per-connection server daemon (190.128.241.2:39292). Jul 16 00:08:34.051580 sshd[7524]: Received disconnect from 190.128.241.2 port 39292:11: Bye Bye [preauth] Jul 16 00:08:34.051580 sshd[7524]: Disconnected from authenticating user root 190.128.241.2 port 39292 [preauth] Jul 16 00:08:34.055281 systemd[1]: sshd@13-147.75.203.227:22-190.128.241.2:39292.service: Deactivated successfully. 
Jul 16 00:08:37.874674 containerd[1964]: time="2025-07-16T00:08:37.874416547Z" level=warning msg="container event discarded" container=f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9 type=CONTAINER_CREATED_EVENT Jul 16 00:08:37.874674 containerd[1964]: time="2025-07-16T00:08:37.874619988Z" level=warning msg="container event discarded" container=f9ca418393a366aac18caf21caee0aaac0414e34a4f589f3eb402af9d96e9cf9 type=CONTAINER_STARTED_EVENT Jul 16 00:08:37.892062 containerd[1964]: time="2025-07-16T00:08:37.891951977Z" level=warning msg="container event discarded" container=3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e type=CONTAINER_CREATED_EVENT Jul 16 00:08:37.919926 containerd[1964]: time="2025-07-16T00:08:37.919894180Z" level=warning msg="container event discarded" container=9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007 type=CONTAINER_CREATED_EVENT Jul 16 00:08:37.919926 containerd[1964]: time="2025-07-16T00:08:37.919906319Z" level=warning msg="container event discarded" container=9a4b8202e475a5b3aa14ca8f029e7033450ce427ab455166ac9f901e99795007 type=CONTAINER_STARTED_EVENT Jul 16 00:08:37.919926 containerd[1964]: time="2025-07-16T00:08:37.919912101Z" level=warning msg="container event discarded" container=485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07 type=CONTAINER_CREATED_EVENT Jul 16 00:08:37.919926 containerd[1964]: time="2025-07-16T00:08:37.919920024Z" level=warning msg="container event discarded" container=485dc4c250bb9e5fd9b2e178e6de28f4622f43d895ffaf2e5d0fcd04726dbc07 type=CONTAINER_STARTED_EVENT Jul 16 00:08:37.919926 containerd[1964]: time="2025-07-16T00:08:37.919925792Z" level=warning msg="container event discarded" container=574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2 type=CONTAINER_CREATED_EVENT Jul 16 00:08:37.920066 containerd[1964]: time="2025-07-16T00:08:37.919930291Z" level=warning msg="container event discarded" container=3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e type=CONTAINER_CREATED_EVENT Jul 16 00:08:37.948171 containerd[1964]: time="2025-07-16T00:08:37.948119902Z" level=warning msg="container event discarded" container=3b387e5707ade3f6ec8f7ec78358739bbdd6523878579a791628ae717564b20e type=CONTAINER_STARTED_EVENT Jul 16 00:08:37.974654 containerd[1964]: time="2025-07-16T00:08:37.974505052Z" level=warning msg="container event discarded" container=574ab7d05afedc812aa17ba17a01fc439472d155a60f71f7cca763fdb3fcc3b2 type=CONTAINER_STARTED_EVENT Jul 16 00:08:37.974654 containerd[1964]: time="2025-07-16T00:08:37.974603543Z" level=warning msg="container event discarded" container=3ce679cf4be92ee5e3c0bdad76dbc69239944c1ef8a89e07ec4c531701f9289e type=CONTAINER_STARTED_EVENT Jul 16 00:08:39.995399 containerd[1964]: time="2025-07-16T00:08:39.995369630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"7f268007e37b4f4cb58e1c21106d6b8acd2aad5cfd7cf17732d91d65b610803b\" pid:7541 exited_at:{seconds:1752624519 nanos:995176460}" Jul 16 00:08:41.807993 containerd[1964]: time="2025-07-16T00:08:41.807963535Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"8e4fa5d1645e1161d1923eb4aa34af9bec3a6eaff19fb3a0b270a49da7900790\" pid:7577 exited_at:{seconds:1752624521 nanos:807840342}" Jul 16 00:08:43.885512 containerd[1964]: time="2025-07-16T00:08:43.885463005Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"01353a64366e4e019bd6a8877d4efbfc4c3ff1e03e5f90acbf979be5951143b0\" pid:7598 exited_at:{seconds:1752624523 nanos:885237114}"
Jul 16 00:08:47.361600 containerd[1964]: time="2025-07-16T00:08:47.361412976Z" level=warning msg="container event discarded" container=417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88 type=CONTAINER_CREATED_EVENT
Jul 16 00:08:47.361600 containerd[1964]: time="2025-07-16T00:08:47.361547598Z" level=warning msg="container event discarded" container=417f8a259c835d422873f93eb0cc92c3b22cbe439a994da8313b8d96358b2b88 type=CONTAINER_STARTED_EVENT
Jul 16 00:08:47.361600 containerd[1964]: time="2025-07-16T00:08:47.361576024Z" level=warning msg="container event discarded" container=fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d type=CONTAINER_CREATED_EVENT
Jul 16 00:08:47.416028 containerd[1964]: time="2025-07-16T00:08:47.415886252Z" level=warning msg="container event discarded" container=fb2d8dbc267d4527066b04144b1c7149dcf4ff6a795af08a7188620b29be630d type=CONTAINER_STARTED_EVENT
Jul 16 00:08:47.949979 containerd[1964]: time="2025-07-16T00:08:47.949839880Z" level=warning msg="container event discarded" container=10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0 type=CONTAINER_CREATED_EVENT
Jul 16 00:08:47.949979 containerd[1964]: time="2025-07-16T00:08:47.949938226Z" level=warning msg="container event discarded" container=10aaa789ccd9aa22ba77bca6046d7050131fe703373107eda07743ae163213f0 type=CONTAINER_STARTED_EVENT
Jul 16 00:08:50.012456 containerd[1964]: time="2025-07-16T00:08:50.012296828Z" level=warning msg="container event discarded" container=72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb type=CONTAINER_CREATED_EVENT
Jul 16 00:08:50.049006 containerd[1964]: time="2025-07-16T00:08:50.048860473Z" level=warning msg="container event discarded" container=72e6e861ea9ed8a7b5135dfd99a816845006d566893aa35f1693df0102b30dcb type=CONTAINER_STARTED_EVENT
Jul 16 00:08:57.130682 containerd[1964]: time="2025-07-16T00:08:57.130654057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"2ae527b915f3bbdce897fddc5b3f54b87158d57945dcab3450e3c94dcd3c0fb0\" pid:7661 exited_at:{seconds:1752624537 nanos:130368276}"
Jul 16 00:08:57.412459 containerd[1964]: time="2025-07-16T00:08:57.412188600Z" level=warning msg="container event discarded" container=8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67 type=CONTAINER_CREATED_EVENT
Jul 16 00:08:57.412459 containerd[1964]: time="2025-07-16T00:08:57.412277073Z" level=warning msg="container event discarded" container=8ed280399c4debd44ef1dd4de320c387c6ec58d59b74eaab5681338465a4dc67 type=CONTAINER_STARTED_EVENT
Jul 16 00:08:57.651358 containerd[1964]: time="2025-07-16T00:08:57.651215586Z" level=warning msg="container event discarded" container=a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c type=CONTAINER_CREATED_EVENT
Jul 16 00:08:57.651358 containerd[1964]: time="2025-07-16T00:08:57.651295745Z" level=warning msg="container event discarded" container=a2aeb2cd09f84404b28852c92d466eb588ba300bb8665f30c8508debdc88052c type=CONTAINER_STARTED_EVENT
Jul 16 00:08:59.164608 containerd[1964]: time="2025-07-16T00:08:59.164450145Z" level=warning msg="container event discarded" container=20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050 type=CONTAINER_CREATED_EVENT
Jul 16 00:08:59.214105 containerd[1964]: time="2025-07-16T00:08:59.213951885Z" level=warning msg="container event discarded" container=20ff5043ada4cb747d606a44f57ffa33228276f8f3a08858e6faa813787b8050 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:00.543082 containerd[1964]: time="2025-07-16T00:09:00.542931148Z" level=warning msg="container event discarded" container=16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef type=CONTAINER_CREATED_EVENT
Jul 16 00:09:00.588586 containerd[1964]: time="2025-07-16T00:09:00.588517165Z" level=warning msg="container event discarded" container=16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef type=CONTAINER_STARTED_EVENT
Jul 16 00:09:01.507152 containerd[1964]: time="2025-07-16T00:09:01.506931811Z" level=warning msg="container event discarded" container=16e8c1b91905e6fa14c9e2a65e22f45c828e71837245a0f87ae985a45374b4ef type=CONTAINER_STOPPED_EVENT
Jul 16 00:09:03.982862 containerd[1964]: time="2025-07-16T00:09:03.982729991Z" level=warning msg="container event discarded" container=8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf type=CONTAINER_CREATED_EVENT
Jul 16 00:09:04.016225 containerd[1964]: time="2025-07-16T00:09:04.016028277Z" level=warning msg="container event discarded" container=8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf type=CONTAINER_STARTED_EVENT
Jul 16 00:09:04.991622 containerd[1964]: time="2025-07-16T00:09:04.991450309Z" level=warning msg="container event discarded" container=8521c59863264bb672db6fb8e9d1340f08e7d96dcb1ae0090694146ca8455cbf type=CONTAINER_STOPPED_EVENT
Jul 16 00:09:08.884428 containerd[1964]: time="2025-07-16T00:09:08.884264271Z" level=warning msg="container event discarded" container=d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:08.924955 containerd[1964]: time="2025-07-16T00:09:08.924805766Z" level=warning msg="container event discarded" container=d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:10.182966 containerd[1964]: time="2025-07-16T00:09:10.182813236Z" level=warning msg="container event discarded" container=02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:10.182966 containerd[1964]: time="2025-07-16T00:09:10.182905958Z" level=warning msg="container event discarded" container=02aeea75ca17532ffb21d4e706b7844421580f64f4b5a8ec42ffdeadb834dc48 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:11.581791 containerd[1964]: time="2025-07-16T00:09:11.581654727Z" level=warning msg="container event discarded" container=c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:11.624000 containerd[1964]: time="2025-07-16T00:09:11.623905240Z" level=warning msg="container event discarded" container=c3c25102955ce36aa9ba71684bddf27c41c660ddbab20be72055322fe1312be1 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:11.774183 containerd[1964]: time="2025-07-16T00:09:11.774152770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"7d229adf7ee8e236a90e3c242145a3c0405fd3c1157ac62c5d316593b18f8698\" pid:7694 exited_at:{seconds:1752624551 nanos:773964495}"
Jul 16 00:09:13.450247 containerd[1964]: time="2025-07-16T00:09:13.450128427Z" level=warning msg="container event discarded" container=9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:13.500786 containerd[1964]: time="2025-07-16T00:09:13.500624843Z" level=warning msg="container event discarded" container=9766fa2683ff9aed6176b3dc2765fe6bd641cfeb107d4fb7dbf931662d0ebc10 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:13.877633 containerd[1964]: time="2025-07-16T00:09:13.877542536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"bfd4873a30de68242425170db22d22a4ca9e762d1fcb611175057eaf97ce0ed1\" pid:7716 exited_at:{seconds:1752624553 nanos:877316420}"
Jul 16 00:09:16.703701 containerd[1964]: time="2025-07-16T00:09:16.703492219Z" level=warning msg="container event discarded" container=6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea type=CONTAINER_CREATED_EVENT
Jul 16 00:09:16.703701 containerd[1964]: time="2025-07-16T00:09:16.703644060Z" level=warning msg="container event discarded" container=6f70e8c97661a727dcc64ab4d0f8f570ac1cd3bf812a616db378ccbd8e6bc5ea type=CONTAINER_STARTED_EVENT
Jul 16 00:09:17.647894 containerd[1964]: time="2025-07-16T00:09:17.647748519Z" level=warning msg="container event discarded" container=f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:17.647894 containerd[1964]: time="2025-07-16T00:09:17.647877788Z" level=warning msg="container event discarded" container=f4bf5d72e3ac29ad7b416ee699762c515423d1ae2ea30165298e49ab396c26a1 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:17.647894 containerd[1964]: time="2025-07-16T00:09:17.647906060Z" level=warning msg="container event discarded" container=4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c type=CONTAINER_CREATED_EVENT
Jul 16 00:09:17.694167 containerd[1964]: time="2025-07-16T00:09:17.694027455Z" level=warning msg="container event discarded" container=4842fa39c7e192e6e7bdc6e606e1ad7012592b85ca3c20f2076af9946b86223c type=CONTAINER_STARTED_EVENT
Jul 16 00:09:17.741734 containerd[1964]: time="2025-07-16T00:09:17.741560071Z" level=warning msg="container event discarded" container=b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:17.741734 containerd[1964]: time="2025-07-16T00:09:17.741660661Z" level=warning msg="container event discarded" container=b66182c7dd9e6876d0f53f9cce9b704aad4558fd91fccefa7faa1286c94819c9 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:18.662025 containerd[1964]: time="2025-07-16T00:09:18.661879483Z" level=warning msg="container event discarded" container=c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b type=CONTAINER_CREATED_EVENT
Jul 16 00:09:18.662025 containerd[1964]: time="2025-07-16T00:09:18.661965690Z" level=warning msg="container event discarded" container=c5144c5b37265ee9b7de88301fe4fa5605c06b828b52d16814882c2e8f91ad0b type=CONTAINER_STARTED_EVENT
Jul 16 00:09:18.853542 containerd[1964]: time="2025-07-16T00:09:18.853466614Z" level=warning msg="container event discarded" container=374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:18.897860 containerd[1964]: time="2025-07-16T00:09:18.897701328Z" level=warning msg="container event discarded" container=374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:19.110559 containerd[1964]: time="2025-07-16T00:09:19.110530925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"4c2ed121f7708cc4415288ed637d5f7a7f26a76326743021a50900c0c6e1d26d\" pid:7753 exited_at:{seconds:1752624559 nanos:110396401}"
Jul 16 00:09:19.722523 containerd[1964]: time="2025-07-16T00:09:19.722344394Z" level=warning msg="container event discarded" container=1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:19.722523 containerd[1964]: time="2025-07-16T00:09:19.722472073Z" level=warning msg="container event discarded" container=1d0c3cb8041c5663c0b3bf6e8e1a61ec1b39aa996f1cd3f7fb038bbd3fc17ed1 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:19.722523 containerd[1964]: time="2025-07-16T00:09:19.722504472Z" level=warning msg="container event discarded" container=9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:19.764088 containerd[1964]: time="2025-07-16T00:09:19.763923023Z" level=warning msg="container event discarded" container=9fa115f2a2e571b7378565917bb1a34f6f510136c53e7dcb40d83381919a3ec1 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:19.779546 containerd[1964]: time="2025-07-16T00:09:19.779425238Z" level=warning msg="container event discarded" container=e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f type=CONTAINER_CREATED_EVENT
Jul 16 00:09:19.779546 containerd[1964]: time="2025-07-16T00:09:19.779490988Z" level=warning msg="container event discarded" container=e5fa309445e7c2844888e639f6410890e3ec0a3a7cae52eae3d64ad5c567bd8f type=CONTAINER_STARTED_EVENT
Jul 16 00:09:20.311583 containerd[1964]: time="2025-07-16T00:09:20.311431835Z" level=warning msg="container event discarded" container=113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:20.353024 containerd[1964]: time="2025-07-16T00:09:20.352881750Z" level=warning msg="container event discarded" container=113cd57ef479c9102434a8ab844556ea156999fd55713e95bf711d6b62e3e321 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:20.642455 containerd[1964]: time="2025-07-16T00:09:20.642158173Z" level=warning msg="container event discarded" container=2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:20.642455 containerd[1964]: time="2025-07-16T00:09:20.642275609Z" level=warning msg="container event discarded" container=2dd42710959824d43d7b8b33a325b4570110b69bda9e7cf4e03b57cdfc640935 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:22.283718 containerd[1964]: time="2025-07-16T00:09:22.283611946Z" level=warning msg="container event discarded" container=83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e type=CONTAINER_CREATED_EVENT
Jul 16 00:09:22.333146 containerd[1964]: time="2025-07-16T00:09:22.333016586Z" level=warning msg="container event discarded" container=83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e type=CONTAINER_STARTED_EVENT
Jul 16 00:09:24.159432 containerd[1964]: time="2025-07-16T00:09:24.159315762Z" level=warning msg="container event discarded" container=7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd type=CONTAINER_CREATED_EVENT
Jul 16 00:09:24.213929 containerd[1964]: time="2025-07-16T00:09:24.213760157Z" level=warning msg="container event discarded" container=7dcdc04aca38c675f83dcc3e8a31c3851b47b4da5107acbaf1cab758990ee5dd type=CONTAINER_STARTED_EVENT
Jul 16 00:09:25.628420 containerd[1964]: time="2025-07-16T00:09:25.628298012Z" level=warning msg="container event discarded" container=34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc type=CONTAINER_CREATED_EVENT
Jul 16 00:09:25.680916 containerd[1964]: time="2025-07-16T00:09:25.680756194Z" level=warning msg="container event discarded" container=34a4b777ee78ae72688be2a1f5e20ada47f66b36567d07f5f4b66f0d38fd70bc type=CONTAINER_STARTED_EVENT
Jul 16 00:09:26.054945 containerd[1964]: time="2025-07-16T00:09:26.054803513Z" level=warning msg="container event discarded" container=3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1 type=CONTAINER_CREATED_EVENT
Jul 16 00:09:26.113416 containerd[1964]: time="2025-07-16T00:09:26.113270321Z" level=warning msg="container event discarded" container=3f8c5cbbcc9ceeb72e624fd6b848d00b2506b81f083de63f3d8a3a4048c6bbe1 type=CONTAINER_STARTED_EVENT
Jul 16 00:09:27.117959 containerd[1964]: time="2025-07-16T00:09:27.117931630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"c732564b4583465095ad7f485a7b4add3783307188dd8e8d28f3c4650f623b37\" pid:7775 exited_at:{seconds:1752624567 nanos:117684910}"
Jul 16 00:09:39.995107 containerd[1964]: time="2025-07-16T00:09:39.995039917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"cb6006f6462d4ff9f95f6187ee61756add4312afd54ed4a749993690c4ad1290\" pid:7815 exited_at:{seconds:1752624579 nanos:994727360}"
Jul 16 00:09:41.758887 containerd[1964]: time="2025-07-16T00:09:41.758859594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"8f9b8920c436c5928f7920afde28acc5608c08e173e9fc094eb7bb3598d10cca\" pid:7853 exited_at:{seconds:1752624581 nanos:758706083}"
Jul 16 00:09:43.936986 containerd[1964]: time="2025-07-16T00:09:43.936928012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"0b0b9ea578febfd79d4c292698e0653b8d814a99df7d0ac40f4ef8bc9dfeb305\" pid:7875 exited_at:{seconds:1752624583 nanos:936584051}"
Jul 16 00:09:57.140458 containerd[1964]: time="2025-07-16T00:09:57.140428734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"75446142832f25cc6f1253a75b98713ebc627d0d766a6390cd5a36062b401ac9\" pid:7912 exited_at:{seconds:1752624597 nanos:140182104}"
Jul 16 00:09:57.223838 systemd[1]: Started sshd@14-147.75.203.227:22-190.128.241.2:55644.service - OpenSSH per-connection server daemon (190.128.241.2:55644).
Jul 16 00:09:58.545419 sshd[7935]: Received disconnect from 190.128.241.2 port 55644:11: Bye Bye [preauth]
Jul 16 00:09:58.545419 sshd[7935]: Disconnected from authenticating user root 190.128.241.2 port 55644 [preauth]
Jul 16 00:09:58.548873 systemd[1]: sshd@14-147.75.203.227:22-190.128.241.2:55644.service: Deactivated successfully.
Jul 16 00:10:11.808224 containerd[1964]: time="2025-07-16T00:10:11.808190328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"4cd8cd5c1423527c3feb360e6f3b1cd9ab52b61ac9ecc775dfb2c2a9debbb4b5\" pid:7950 exited_at:{seconds:1752624611 nanos:808024541}"
Jul 16 00:10:13.879377 containerd[1964]: time="2025-07-16T00:10:13.879333887Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"deee773d086a531a8df311ec673686ea0f10fd0359107bb73c12c3cf557d8727\" pid:7973 exited_at:{seconds:1752624613 nanos:879063281}"
Jul 16 00:10:19.057198 containerd[1964]: time="2025-07-16T00:10:19.057165465Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"6819e0c502457c9cbc8ad44d387063e50949ea4bfb9f3e34e0b31228d9f1b51d\" pid:8011 exited_at:{seconds:1752624619 nanos:57039442}"
Jul 16 00:10:27.129481 containerd[1964]: time="2025-07-16T00:10:27.129441726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"8ee5805940740227cc1a29bf2aab0dc201555d44f590e51d60a8ce86e182d980\" pid:8050 exited_at:{seconds:1752624627 nanos:129114035}"
Jul 16 00:10:39.988804 containerd[1964]: time="2025-07-16T00:10:39.988711897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"52961f4087fba0dd454a303317590207fb7fcf0bb2785a052558cb9efa609da4\" pid:8089 exited_at:{seconds:1752624639 nanos:988470294}"
Jul 16 00:10:41.799481 containerd[1964]: time="2025-07-16T00:10:41.799450463Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"fcf2443f3935d69c4769dcab48a609290fbd5628c49309e29f606ce986191f2a\" pid:8122 exited_at:{seconds:1752624641 nanos:799274368}"
Jul 16 00:10:43.878025 containerd[1964]: time="2025-07-16T00:10:43.877967339Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"42aa440e8f88fb57bf3a46f12ddbcdc296785a103819a96a83985863d565b0a6\" pid:8145 exited_at:{seconds:1752624643 nanos:877730927}"
Jul 16 00:10:53.353154 systemd[1]: Started sshd@15-147.75.203.227:22-147.75.109.163:57498.service - OpenSSH per-connection server daemon (147.75.109.163:57498).
Jul 16 00:10:53.451554 sshd[8174]: Accepted publickey for core from 147.75.109.163 port 57498 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:10:53.452400 sshd-session[8174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:10:53.455610 systemd-logind[1947]: New session 12 of user core.
Jul 16 00:10:53.471880 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 16 00:10:53.600673 sshd[8176]: Connection closed by 147.75.109.163 port 57498
Jul 16 00:10:53.600816 sshd-session[8174]: pam_unix(sshd:session): session closed for user core
Jul 16 00:10:53.602482 systemd[1]: sshd@15-147.75.203.227:22-147.75.109.163:57498.service: Deactivated successfully.
Jul 16 00:10:53.603463 systemd[1]: session-12.scope: Deactivated successfully.
Jul 16 00:10:53.604430 systemd-logind[1947]: Session 12 logged out. Waiting for processes to exit.
Jul 16 00:10:53.605017 systemd-logind[1947]: Removed session 12.
Jul 16 00:10:57.158675 containerd[1964]: time="2025-07-16T00:10:57.158651455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"cf89c4c580eb49ed7e157f237b95f47de4c9fceffce998d48649f370ca149d20\" pid:8217 exited_at:{seconds:1752624657 nanos:158413719}"
Jul 16 00:10:58.618785 systemd[1]: Started sshd@16-147.75.203.227:22-147.75.109.163:33318.service - OpenSSH per-connection server daemon (147.75.109.163:33318).
Jul 16 00:10:58.672873 sshd[8240]: Accepted publickey for core from 147.75.109.163 port 33318 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:10:58.673666 sshd-session[8240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:10:58.677014 systemd-logind[1947]: New session 13 of user core.
Jul 16 00:10:58.688913 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 16 00:10:58.774918 sshd[8242]: Connection closed by 147.75.109.163 port 33318
Jul 16 00:10:58.775117 sshd-session[8240]: pam_unix(sshd:session): session closed for user core
Jul 16 00:10:58.777287 systemd[1]: sshd@16-147.75.203.227:22-147.75.109.163:33318.service: Deactivated successfully.
Jul 16 00:10:58.778269 systemd[1]: session-13.scope: Deactivated successfully.
Jul 16 00:10:58.778685 systemd-logind[1947]: Session 13 logged out. Waiting for processes to exit.
Jul 16 00:10:58.779223 systemd-logind[1947]: Removed session 13.
Jul 16 00:11:03.797773 systemd[1]: Started sshd@17-147.75.203.227:22-147.75.109.163:33326.service - OpenSSH per-connection server daemon (147.75.109.163:33326).
Jul 16 00:11:03.851787 sshd[8269]: Accepted publickey for core from 147.75.109.163 port 33326 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:03.852404 sshd-session[8269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:03.855344 systemd-logind[1947]: New session 14 of user core.
Jul 16 00:11:03.871017 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 16 00:11:03.956745 sshd[8271]: Connection closed by 147.75.109.163 port 33326
Jul 16 00:11:03.956935 sshd-session[8269]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:03.973929 systemd[1]: sshd@17-147.75.203.227:22-147.75.109.163:33326.service: Deactivated successfully.
Jul 16 00:11:03.974851 systemd[1]: session-14.scope: Deactivated successfully.
Jul 16 00:11:03.975344 systemd-logind[1947]: Session 14 logged out. Waiting for processes to exit.
Jul 16 00:11:03.976503 systemd[1]: Started sshd@18-147.75.203.227:22-147.75.109.163:33338.service - OpenSSH per-connection server daemon (147.75.109.163:33338).
Jul 16 00:11:03.977117 systemd-logind[1947]: Removed session 14.
Jul 16 00:11:04.041784 sshd[8296]: Accepted publickey for core from 147.75.109.163 port 33338 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:04.043158 sshd-session[8296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:04.048711 systemd-logind[1947]: New session 15 of user core.
Jul 16 00:11:04.058930 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 16 00:11:04.162630 sshd[8298]: Connection closed by 147.75.109.163 port 33338
Jul 16 00:11:04.162820 sshd-session[8296]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:04.175018 systemd[1]: sshd@18-147.75.203.227:22-147.75.109.163:33338.service: Deactivated successfully.
Jul 16 00:11:04.176004 systemd[1]: session-15.scope: Deactivated successfully.
Jul 16 00:11:04.176460 systemd-logind[1947]: Session 15 logged out. Waiting for processes to exit.
Jul 16 00:11:04.177563 systemd[1]: Started sshd@19-147.75.203.227:22-147.75.109.163:33352.service - OpenSSH per-connection server daemon (147.75.109.163:33352).
Jul 16 00:11:04.178155 systemd-logind[1947]: Removed session 15.
Jul 16 00:11:04.210100 sshd[8321]: Accepted publickey for core from 147.75.109.163 port 33352 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:04.210746 sshd-session[8321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:04.213567 systemd-logind[1947]: New session 16 of user core.
Jul 16 00:11:04.225988 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 16 00:11:04.400844 sshd[8323]: Connection closed by 147.75.109.163 port 33352
Jul 16 00:11:04.401010 sshd-session[8321]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:04.403053 systemd[1]: sshd@19-147.75.203.227:22-147.75.109.163:33352.service: Deactivated successfully.
Jul 16 00:11:04.404242 systemd[1]: session-16.scope: Deactivated successfully.
Jul 16 00:11:04.405154 systemd-logind[1947]: Session 16 logged out. Waiting for processes to exit.
Jul 16 00:11:04.405904 systemd-logind[1947]: Removed session 16.
Jul 16 00:11:09.424684 systemd[1]: Started sshd@20-147.75.203.227:22-147.75.109.163:41338.service - OpenSSH per-connection server daemon (147.75.109.163:41338).
Jul 16 00:11:09.476758 sshd[8352]: Accepted publickey for core from 147.75.109.163 port 41338 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:09.477565 sshd-session[8352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:09.480853 systemd-logind[1947]: New session 17 of user core.
Jul 16 00:11:09.490043 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 16 00:11:09.576002 sshd[8354]: Connection closed by 147.75.109.163 port 41338
Jul 16 00:11:09.576210 sshd-session[8352]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:09.578387 systemd[1]: sshd@20-147.75.203.227:22-147.75.109.163:41338.service: Deactivated successfully.
Jul 16 00:11:09.579356 systemd[1]: session-17.scope: Deactivated successfully.
Jul 16 00:11:09.579758 systemd-logind[1947]: Session 17 logged out. Waiting for processes to exit.
Jul 16 00:11:09.580380 systemd-logind[1947]: Removed session 17.
Jul 16 00:11:11.807872 containerd[1964]: time="2025-07-16T00:11:11.807841454Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"a966f71bf025d5d0aa06377f696ff4a615b33e407925ce7c5354cd1a75c60cc9\" pid:8391 exited_at:{seconds:1752624671 nanos:807649881}"
Jul 16 00:11:13.882922 containerd[1964]: time="2025-07-16T00:11:13.882857796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"6d3fddd4df5478622b9018cc4db7bbc858b53e445025824a04e1be27b0b052e9\" pid:8413 exit_status:1 exited_at:{seconds:1752624673 nanos:882609564}"
Jul 16 00:11:14.587169 systemd[1]: Started sshd@21-147.75.203.227:22-147.75.109.163:41348.service - OpenSSH per-connection server daemon (147.75.109.163:41348).
Jul 16 00:11:14.619703 sshd[8438]: Accepted publickey for core from 147.75.109.163 port 41348 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:14.620389 sshd-session[8438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:14.623306 systemd-logind[1947]: New session 18 of user core.
Jul 16 00:11:14.644074 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 16 00:11:14.733746 sshd[8440]: Connection closed by 147.75.109.163 port 41348
Jul 16 00:11:14.733930 sshd-session[8438]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:14.735829 systemd[1]: sshd@21-147.75.203.227:22-147.75.109.163:41348.service: Deactivated successfully.
Jul 16 00:11:14.736890 systemd[1]: session-18.scope: Deactivated successfully.
Jul 16 00:11:14.737819 systemd-logind[1947]: Session 18 logged out. Waiting for processes to exit.
Jul 16 00:11:14.738614 systemd-logind[1947]: Removed session 18.
Jul 16 00:11:19.088576 containerd[1964]: time="2025-07-16T00:11:19.088515015Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"f9674fa083aeade622f30cbe46565a599671a830444cb2f013b3deb3bf6f93c5\" pid:8479 exited_at:{seconds:1752624679 nanos:88328386}"
Jul 16 00:11:19.755742 systemd[1]: Started sshd@22-147.75.203.227:22-147.75.109.163:38084.service - OpenSSH per-connection server daemon (147.75.109.163:38084).
Jul 16 00:11:19.798981 sshd[8490]: Accepted publickey for core from 147.75.109.163 port 38084 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:19.799594 sshd-session[8490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:19.802611 systemd-logind[1947]: New session 19 of user core.
Jul 16 00:11:19.815894 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 16 00:11:19.903941 sshd[8492]: Connection closed by 147.75.109.163 port 38084
Jul 16 00:11:19.904125 sshd-session[8490]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:19.905956 systemd[1]: sshd@22-147.75.203.227:22-147.75.109.163:38084.service: Deactivated successfully.
Jul 16 00:11:19.907026 systemd[1]: session-19.scope: Deactivated successfully.
Jul 16 00:11:19.907746 systemd-logind[1947]: Session 19 logged out. Waiting for processes to exit.
Jul 16 00:11:19.908491 systemd-logind[1947]: Removed session 19.
Jul 16 00:11:24.934652 systemd[1]: Started sshd@23-147.75.203.227:22-147.75.109.163:38094.service - OpenSSH per-connection server daemon (147.75.109.163:38094).
Jul 16 00:11:24.979616 sshd[8516]: Accepted publickey for core from 147.75.109.163 port 38094 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:24.980284 sshd-session[8516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:24.983204 systemd-logind[1947]: New session 20 of user core.
Jul 16 00:11:24.992866 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 16 00:11:25.121882 sshd[8518]: Connection closed by 147.75.109.163 port 38094
Jul 16 00:11:25.122069 sshd-session[8516]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:25.145103 systemd[1]: sshd@23-147.75.203.227:22-147.75.109.163:38094.service: Deactivated successfully.
Jul 16 00:11:25.149417 systemd[1]: session-20.scope: Deactivated successfully.
Jul 16 00:11:25.151674 systemd-logind[1947]: Session 20 logged out. Waiting for processes to exit.
Jul 16 00:11:25.158128 systemd[1]: Started sshd@24-147.75.203.227:22-147.75.109.163:38110.service - OpenSSH per-connection server daemon (147.75.109.163:38110).
Jul 16 00:11:25.160107 systemd-logind[1947]: Removed session 20.
Jul 16 00:11:25.255904 sshd[8543]: Accepted publickey for core from 147.75.109.163 port 38110 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:25.256610 sshd-session[8543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:25.259414 systemd-logind[1947]: New session 21 of user core.
Jul 16 00:11:25.272929 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 16 00:11:25.562420 sshd[8545]: Connection closed by 147.75.109.163 port 38110
Jul 16 00:11:25.563096 sshd-session[8543]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:25.596883 systemd[1]: sshd@24-147.75.203.227:22-147.75.109.163:38110.service: Deactivated successfully.
Jul 16 00:11:25.601275 systemd[1]: session-21.scope: Deactivated successfully.
Jul 16 00:11:25.603670 systemd-logind[1947]: Session 21 logged out. Waiting for processes to exit.
Jul 16 00:11:25.610180 systemd[1]: Started sshd@25-147.75.203.227:22-147.75.109.163:38120.service - OpenSSH per-connection server daemon (147.75.109.163:38120).
Jul 16 00:11:25.612092 systemd-logind[1947]: Removed session 21.
Jul 16 00:11:25.705132 sshd[8567]: Accepted publickey for core from 147.75.109.163 port 38120 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:25.705836 sshd-session[8567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:25.708705 systemd-logind[1947]: New session 22 of user core.
Jul 16 00:11:25.721843 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 16 00:11:26.786328 sshd[8569]: Connection closed by 147.75.109.163 port 38120
Jul 16 00:11:26.786613 sshd-session[8567]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:26.813552 systemd[1]: sshd@25-147.75.203.227:22-147.75.109.163:38120.service: Deactivated successfully.
Jul 16 00:11:26.815355 systemd[1]: session-22.scope: Deactivated successfully.
Jul 16 00:11:26.815488 systemd[1]: session-22.scope: Consumed 486ms CPU time, 78M memory peak.
Jul 16 00:11:26.815809 systemd-logind[1947]: Session 22 logged out. Waiting for processes to exit.
Jul 16 00:11:26.817366 systemd[1]: Started sshd@26-147.75.203.227:22-147.75.109.163:38128.service - OpenSSH per-connection server daemon (147.75.109.163:38128).
Jul 16 00:11:26.817707 systemd-logind[1947]: Removed session 22.
Jul 16 00:11:26.871402 sshd[8599]: Accepted publickey for core from 147.75.109.163 port 38128 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:26.872098 sshd-session[8599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:26.875029 systemd-logind[1947]: New session 23 of user core.
Jul 16 00:11:26.885943 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 16 00:11:27.079354 sshd[8603]: Connection closed by 147.75.109.163 port 38128
Jul 16 00:11:27.079599 sshd-session[8599]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:27.087471 systemd[1]: sshd@26-147.75.203.227:22-147.75.109.163:38128.service: Deactivated successfully.
Jul 16 00:11:27.088592 systemd[1]: session-23.scope: Deactivated successfully.
Jul 16 00:11:27.089130 systemd-logind[1947]: Session 23 logged out. Waiting for processes to exit.
Jul 16 00:11:27.090753 systemd[1]: Started sshd@27-147.75.203.227:22-147.75.109.163:38130.service - OpenSSH per-connection server daemon (147.75.109.163:38130).
Jul 16 00:11:27.091181 systemd-logind[1947]: Removed session 23.
Jul 16 00:11:27.126713 containerd[1964]: time="2025-07-16T00:11:27.126688109Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"f8f5c66163c1bcfd1ce13eb250077880f1dcdf21c69f21bdeacdefa72f485696\" pid:8634 exited_at:{seconds:1752624687 nanos:126516115}"
Jul 16 00:11:27.133507 sshd[8644]: Accepted publickey for core from 147.75.109.163 port 38130 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:27.134326 sshd-session[8644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:27.137264 systemd-logind[1947]: New session 24 of user core.
Jul 16 00:11:27.157220 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 16 00:11:27.289390 sshd[8663]: Connection closed by 147.75.109.163 port 38130
Jul 16 00:11:27.289584 sshd-session[8644]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:27.291593 systemd[1]: sshd@27-147.75.203.227:22-147.75.109.163:38130.service: Deactivated successfully.
Jul 16 00:11:27.292730 systemd[1]: session-24.scope: Deactivated successfully.
Jul 16 00:11:27.293839 systemd-logind[1947]: Session 24 logged out. Waiting for processes to exit.
Jul 16 00:11:27.294684 systemd-logind[1947]: Removed session 24.
Jul 16 00:11:27.443843 systemd[1]: Started sshd@28-147.75.203.227:22-190.128.241.2:46344.service - OpenSSH per-connection server daemon (190.128.241.2:46344).
Jul 16 00:11:28.819083 sshd[8690]: Received disconnect from 190.128.241.2 port 46344:11: Bye Bye [preauth]
Jul 16 00:11:28.819083 sshd[8690]: Disconnected from authenticating user root 190.128.241.2 port 46344 [preauth]
Jul 16 00:11:28.822846 systemd[1]: sshd@28-147.75.203.227:22-190.128.241.2:46344.service: Deactivated successfully.
Jul 16 00:11:32.312224 systemd[1]: Started sshd@29-147.75.203.227:22-147.75.109.163:55350.service - OpenSSH per-connection server daemon (147.75.109.163:55350).
Jul 16 00:11:32.358910 sshd[8698]: Accepted publickey for core from 147.75.109.163 port 55350 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:32.359560 sshd-session[8698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:32.362586 systemd-logind[1947]: New session 25 of user core.
Jul 16 00:11:32.374865 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 16 00:11:32.452758 sshd[8700]: Connection closed by 147.75.109.163 port 55350
Jul 16 00:11:32.452964 sshd-session[8698]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:32.454952 systemd[1]: sshd@29-147.75.203.227:22-147.75.109.163:55350.service: Deactivated successfully.
Jul 16 00:11:32.455949 systemd[1]: session-25.scope: Deactivated successfully.
Jul 16 00:11:32.456679 systemd-logind[1947]: Session 25 logged out. Waiting for processes to exit.
Jul 16 00:11:32.457507 systemd-logind[1947]: Removed session 25.
Jul 16 00:11:37.473647 systemd[1]: Started sshd@30-147.75.203.227:22-147.75.109.163:55354.service - OpenSSH per-connection server daemon (147.75.109.163:55354).
Jul 16 00:11:37.515736 sshd[8725]: Accepted publickey for core from 147.75.109.163 port 55354 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:37.516376 sshd-session[8725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:37.519716 systemd-logind[1947]: New session 26 of user core.
Jul 16 00:11:37.536011 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 16 00:11:37.643896 sshd[8727]: Connection closed by 147.75.109.163 port 55354
Jul 16 00:11:37.644105 sshd-session[8725]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:37.645717 systemd[1]: sshd@30-147.75.203.227:22-147.75.109.163:55354.service: Deactivated successfully.
Jul 16 00:11:37.646785 systemd[1]: session-26.scope: Deactivated successfully.
Jul 16 00:11:37.647482 systemd-logind[1947]: Session 26 logged out. Waiting for processes to exit.
Jul 16 00:11:37.648098 systemd-logind[1947]: Removed session 26.
Jul 16 00:11:40.020713 containerd[1964]: time="2025-07-16T00:11:40.020684776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"374c5a31dfa6faa4985363ae14ce3e93668122a47a43ecaf289667bc6c811cd9\" id:\"5cb591392b95442c4626e9130267208a58b01390af2dc0b7e761c6b2efdbec88\" pid:8762 exited_at:{seconds:1752624700 nanos:20427522}"
Jul 16 00:11:41.818692 containerd[1964]: time="2025-07-16T00:11:41.818659332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83b3ba939394f8643be2637769b31412db0c6810a1db3a4e388a53f8ff70ae5e\" id:\"a791a55c84f8ed3fc9ceb37c34b48d154becdaf23f5cb21d6c0e73fc328848fe\" pid:8796 exited_at:{seconds:1752624701 nanos:818498934}"
Jul 16 00:11:42.657715 systemd[1]: Started sshd@31-147.75.203.227:22-147.75.109.163:33566.service - OpenSSH per-connection server daemon (147.75.109.163:33566).
Jul 16 00:11:42.690747 sshd[8808]: Accepted publickey for core from 147.75.109.163 port 33566 ssh2: RSA SHA256:fZrzoayed5b4K5BhAhcpFAkec9IHprc+NMqVGejEH7o
Jul 16 00:11:42.691419 sshd-session[8808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 16 00:11:42.694357 systemd-logind[1947]: New session 27 of user core.
Jul 16 00:11:42.709948 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 16 00:11:42.796058 sshd[8810]: Connection closed by 147.75.109.163 port 33566
Jul 16 00:11:42.796252 sshd-session[8808]: pam_unix(sshd:session): session closed for user core
Jul 16 00:11:42.798147 systemd[1]: sshd@31-147.75.203.227:22-147.75.109.163:33566.service: Deactivated successfully.
Jul 16 00:11:42.799245 systemd[1]: session-27.scope: Deactivated successfully.
Jul 16 00:11:42.800377 systemd-logind[1947]: Session 27 logged out. Waiting for processes to exit.
Jul 16 00:11:42.801219 systemd-logind[1947]: Removed session 27.
Jul 16 00:11:43.940857 containerd[1964]: time="2025-07-16T00:11:43.940790609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2f88f398933433060abcd002673528fcbbd0f6bfcab56e33eaf23bb211601\" id:\"b3af4a7c2b945601d2b036d120fbb56b4fbf78b34b30118c43e17d85916cc80f\" pid:8848 exited_at:{seconds:1752624703 nanos:940554198}"