Nov 8 00:27:47.018735 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:27:47.018750 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:27:47.018757 kernel: BIOS-provided physical RAM map:
Nov 8 00:27:47.018762 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Nov 8 00:27:47.018766 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Nov 8 00:27:47.018770 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Nov 8 00:27:47.018775 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Nov 8 00:27:47.018779 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Nov 8 00:27:47.018783 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819c3fff] usable
Nov 8 00:27:47.018787 kernel: BIOS-e820: [mem 0x00000000819c4000-0x00000000819c4fff] ACPI NVS
Nov 8 00:27:47.018791 kernel: BIOS-e820: [mem 0x00000000819c5000-0x00000000819c5fff] reserved
Nov 8 00:27:47.018796 kernel: BIOS-e820: [mem 0x00000000819c6000-0x000000008afcdfff] usable
Nov 8 00:27:47.018801 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Nov 8 00:27:47.018805 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Nov 8 00:27:47.018810 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Nov 8 00:27:47.018815 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Nov 8 00:27:47.018821 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Nov 8 00:27:47.018825 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Nov 8 00:27:47.018830 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 8 00:27:47.018835 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Nov 8 00:27:47.018840 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Nov 8 00:27:47.018844 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 8 00:27:47.018849 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Nov 8 00:27:47.018854 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Nov 8 00:27:47.018858 kernel: NX (Execute Disable) protection: active
Nov 8 00:27:47.018863 kernel: APIC: Static calls initialized
Nov 8 00:27:47.018868 kernel: SMBIOS 3.2.1 present.
Nov 8 00:27:47.018873 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024
Nov 8 00:27:47.018878 kernel: tsc: Detected 3400.000 MHz processor
Nov 8 00:27:47.018883 kernel: tsc: Detected 3399.906 MHz TSC
Nov 8 00:27:47.018888 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:27:47.018893 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:27:47.018898 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Nov 8 00:27:47.018903 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Nov 8 00:27:47.018908 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:27:47.018913 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Nov 8 00:27:47.018918 kernel: Using GB pages for direct mapping
Nov 8 00:27:47.018923 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:27:47.018928 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Nov 8 00:27:47.018933 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Nov 8 00:27:47.018940 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013)
Nov 8 00:27:47.018945 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Nov 8 00:27:47.018951 kernel: ACPI: FACS 0x000000008C66DF80 000040
Nov 8 00:27:47.018956 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013)
Nov 8 00:27:47.018962 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013)
Nov 8 00:27:47.018967 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Nov 8 00:27:47.018972 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Nov 8 00:27:47.018977 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Nov 8 00:27:47.018983 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Nov 8 00:27:47.018988 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Nov 8 00:27:47.018993 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Nov 8 00:27:47.018999 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 00:27:47.019004 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Nov 8 00:27:47.019009 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Nov 8 00:27:47.019014 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 00:27:47.019019 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 00:27:47.019024 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Nov 8 00:27:47.019030 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Nov 8 00:27:47.019035 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 8 00:27:47.019040 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Nov 8 00:27:47.019046 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Nov 8 00:27:47.019051 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013)
Nov 8 00:27:47.019056 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Nov 8 00:27:47.019061 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Nov 8 00:27:47.019066 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Nov 8 00:27:47.019071 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013)
Nov 8 00:27:47.019077 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Nov 8 00:27:47.019082 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Nov 8 00:27:47.019088 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Nov 8 00:27:47.019093 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Nov 8 00:27:47.019098 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Nov 8 00:27:47.019103 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703]
Nov 8 00:27:47.019108 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed]
Nov 8 00:27:47.019113 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Nov 8 00:27:47.019118 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833]
Nov 8 00:27:47.019124 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b]
Nov 8 00:27:47.019129 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b]
Nov 8 00:27:47.019135 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b]
Nov 8 00:27:47.019140 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0]
Nov 8 00:27:47.019145 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3]
Nov 8 00:27:47.019150 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d]
Nov 8 00:27:47.019155 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba]
Nov 8 00:27:47.019160 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7]
Nov 8 00:27:47.019165 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5]
Nov 8 00:27:47.019170 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e]
Nov 8 00:27:47.019175 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1]
Nov 8 00:27:47.019181 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b]
Nov 8 00:27:47.019186 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d]
Nov 8 00:27:47.019191 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041]
Nov 8 00:27:47.019196 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b]
Nov 8 00:27:47.019201 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3]
Nov 8 00:27:47.019206 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e]
Nov 8 00:27:47.019211 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf]
Nov 8 00:27:47.019217 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3]
Nov 8 00:27:47.019222 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b]
Nov 8 00:27:47.019228 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe]
Nov 8 00:27:47.019233 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7]
Nov 8 00:27:47.019238 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17]
Nov 8 00:27:47.019245 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47]
Nov 8 00:27:47.019250 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77]
Nov 8 00:27:47.019255 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3]
Nov 8 00:27:47.019279 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359]
Nov 8 00:27:47.019284 kernel: No NUMA configuration found
Nov 8 00:27:47.019304 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Nov 8 00:27:47.019309 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Nov 8 00:27:47.019315 kernel: Zone ranges:
Nov 8 00:27:47.019320 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:27:47.019325 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:27:47.019330 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Nov 8 00:27:47.019336 kernel: Movable zone start for each node
Nov 8 00:27:47.019341 kernel: Early memory node ranges
Nov 8 00:27:47.019346 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Nov 8 00:27:47.019351 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Nov 8 00:27:47.019356 kernel: node 0: [mem 0x0000000040400000-0x00000000819c3fff]
Nov 8 00:27:47.019362 kernel: node 0: [mem 0x00000000819c6000-0x000000008afcdfff]
Nov 8 00:27:47.019367 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Nov 8 00:27:47.019372 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Nov 8 00:27:47.019377 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Nov 8 00:27:47.019386 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Nov 8 00:27:47.019392 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:27:47.019398 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Nov 8 00:27:47.019403 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 8 00:27:47.019410 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Nov 8 00:27:47.019415 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Nov 8 00:27:47.019420 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Nov 8 00:27:47.019426 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Nov 8 00:27:47.019431 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Nov 8 00:27:47.019437 kernel: ACPI: PM-Timer IO Port: 0x1808
Nov 8 00:27:47.019442 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 8 00:27:47.019448 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 8 00:27:47.019453 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 8 00:27:47.019460 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 8 00:27:47.019465 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 8 00:27:47.019470 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 8 00:27:47.019476 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 8 00:27:47.019481 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 8 00:27:47.019487 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 8 00:27:47.019492 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 8 00:27:47.019497 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 8 00:27:47.019503 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 8 00:27:47.019509 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 8 00:27:47.019514 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 8 00:27:47.019520 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 8 00:27:47.019525 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 8 00:27:47.019531 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Nov 8 00:27:47.019536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:27:47.019541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:27:47.019547 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:27:47.019552 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:27:47.019559 kernel: TSC deadline timer available
Nov 8 00:27:47.019564 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Nov 8 00:27:47.019570 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Nov 8 00:27:47.019575 kernel: Booting paravirtualized kernel on bare hardware
Nov 8 00:27:47.019581 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:27:47.019587 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 8 00:27:47.019592 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 8 00:27:47.019597 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 8 00:27:47.019603 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 8 00:27:47.019610 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:27:47.019615 kernel: random: crng init done
Nov 8 00:27:47.019621 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Nov 8 00:27:47.019626 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 8 00:27:47.019632 kernel: Fallback order for Node 0: 0
Nov 8 00:27:47.019637 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Nov 8 00:27:47.019642 kernel: Policy zone: Normal
Nov 8 00:27:47.019648 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:27:47.019654 kernel: software IO TLB: area num 16.
Nov 8 00:27:47.019660 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 732416K reserved, 0K cma-reserved)
Nov 8 00:27:47.019666 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 8 00:27:47.019671 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:27:47.019676 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:27:47.019682 kernel: Dynamic Preempt: voluntary
Nov 8 00:27:47.019687 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:27:47.019695 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:27:47.019700 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 8 00:27:47.019707 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:27:47.019712 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:27:47.019718 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:27:47.019723 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:27:47.019729 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 8 00:27:47.019734 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Nov 8 00:27:47.019739 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:27:47.019745 kernel: Console: colour dummy device 80x25
Nov 8 00:27:47.019750 kernel: printk: console [tty0] enabled
Nov 8 00:27:47.019756 kernel: printk: console [ttyS1] enabled
Nov 8 00:27:47.019763 kernel: ACPI: Core revision 20230628
Nov 8 00:27:47.019768 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Nov 8 00:27:47.019774 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:27:47.019779 kernel: DMAR: Host address width 39
Nov 8 00:27:47.019785 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Nov 8 00:27:47.019790 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Nov 8 00:27:47.019796 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Nov 8 00:27:47.019801 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Nov 8 00:27:47.019807 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Nov 8 00:27:47.019813 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Nov 8 00:27:47.019819 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Nov 8 00:27:47.019824 kernel: x2apic enabled
Nov 8 00:27:47.019830 kernel: APIC: Switched APIC routing to: cluster x2apic
Nov 8 00:27:47.019835 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:27:47.019840 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Nov 8 00:27:47.019846 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Nov 8 00:27:47.019852 kernel: CPU0: Thermal monitoring enabled (TM1)
Nov 8 00:27:47.019857 kernel: process: using mwait in idle threads
Nov 8 00:27:47.019864 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:27:47.019869 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:27:47.019875 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:27:47.019880 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 8 00:27:47.019885 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 8 00:27:47.019891 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 8 00:27:47.019896 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 8 00:27:47.019902 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 8 00:27:47.019907 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:27:47.019914 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:27:47.019919 kernel: TAA: Mitigation: TSX disabled
Nov 8 00:27:47.019925 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Nov 8 00:27:47.019930 kernel: SRBDS: Mitigation: Microcode
Nov 8 00:27:47.019936 kernel: GDS: Mitigation: Microcode
Nov 8 00:27:47.019941 kernel: active return thunk: its_return_thunk
Nov 8 00:27:47.019946 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:27:47.019952 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Nov 8 00:27:47.019957 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:27:47.019964 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:27:47.019969 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:27:47.019975 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 8 00:27:47.019980 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 8 00:27:47.019986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:27:47.019991 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 8 00:27:47.019997 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 8 00:27:47.020002 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Nov 8 00:27:47.020008 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:27:47.020014 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:27:47.020019 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:27:47.020025 kernel: landlock: Up and running.
Nov 8 00:27:47.020030 kernel: SELinux: Initializing.
Nov 8 00:27:47.020036 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:27:47.020041 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:27:47.020047 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 8 00:27:47.020052 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 00:27:47.020059 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 00:27:47.020064 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 00:27:47.020070 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Nov 8 00:27:47.020075 kernel: ... version: 4
Nov 8 00:27:47.020081 kernel: ... bit width: 48
Nov 8 00:27:47.020086 kernel: ... generic registers: 4
Nov 8 00:27:47.020092 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:27:47.020097 kernel: ... max period: 00007fffffffffff
Nov 8 00:27:47.020103 kernel: ... fixed-purpose events: 3
Nov 8 00:27:47.020109 kernel: ... event mask: 000000070000000f
Nov 8 00:27:47.020114 kernel: signal: max sigframe size: 2032
Nov 8 00:27:47.020120 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Nov 8 00:27:47.020125 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:27:47.020131 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:27:47.020137 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Nov 8 00:27:47.020142 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:27:47.020147 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:27:47.020153 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Nov 8 00:27:47.020159 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:27:47.020165 kernel: smp: Brought up 1 node, 16 CPUs
Nov 8 00:27:47.020171 kernel: smpboot: Max logical packages: 1
Nov 8 00:27:47.020176 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Nov 8 00:27:47.020181 kernel: devtmpfs: initialized
Nov 8 00:27:47.020187 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:27:47.020192 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819c4000-0x819c4fff] (4096 bytes)
Nov 8 00:27:47.020198 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Nov 8 00:27:47.020203 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:27:47.020210 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 8 00:27:47.020215 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:27:47.020221 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:27:47.020226 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:27:47.020232 kernel: audit: type=2000 audit(1762561661.119:1): state=initialized audit_enabled=0 res=1
Nov 8 00:27:47.020237 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:27:47.020244 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:27:47.020250 kernel: cpuidle: using governor menu
Nov 8 00:27:47.020275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:27:47.020281 kernel: dca service started, version 1.12.1
Nov 8 00:27:47.020287 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Nov 8 00:27:47.020292 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:27:47.020311 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Nov 8 00:27:47.020317 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:27:47.020322 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:27:47.020328 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:27:47.020333 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:27:47.020338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:27:47.020345 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:27:47.020350 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:27:47.020356 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:27:47.020361 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Nov 8 00:27:47.020367 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 00:27:47.020372 kernel: ACPI: SSDT 0xFFFF9B3181B33400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Nov 8 00:27:47.020378 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 00:27:47.020383 kernel: ACPI: SSDT 0xFFFF9B3181B29000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Nov 8 00:27:47.020389 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 00:27:47.020395 kernel: ACPI: SSDT 0xFFFF9B3180247F00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Nov 8 00:27:47.020401 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 00:27:47.020406 kernel: ACPI: SSDT 0xFFFF9B3181E58800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Nov 8 00:27:47.020412 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 00:27:47.020417 kernel: ACPI: SSDT 0xFFFF9B318012C000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Nov 8 00:27:47.020422 kernel: ACPI: Dynamic OEM Table Load:
Nov 8 00:27:47.020428 kernel: ACPI: SSDT 0xFFFF9B3181B35000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Nov 8 00:27:47.020433 kernel: ACPI: _OSC evaluated successfully for all CPUs
Nov 8 00:27:47.020439 kernel: ACPI: Interpreter enabled
Nov 8 00:27:47.020445 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:27:47.020451 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:27:47.020456 kernel: HEST: Enabling Firmware First mode for corrected errors.
Nov 8 00:27:47.020462 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Nov 8 00:27:47.020467 kernel: HEST: Table parsing has been initialized.
Nov 8 00:27:47.020472 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Nov 8 00:27:47.020478 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:27:47.020483 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 8 00:27:47.020489 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Nov 8 00:27:47.020496 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Nov 8 00:27:47.020501 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Nov 8 00:27:47.020507 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Nov 8 00:27:47.020512 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Nov 8 00:27:47.020518 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Nov 8 00:27:47.020523 kernel: ACPI: \_TZ_.FN00: New power resource
Nov 8 00:27:47.020529 kernel: ACPI: \_TZ_.FN01: New power resource
Nov 8 00:27:47.020534 kernel: ACPI: \_TZ_.FN02: New power resource
Nov 8 00:27:47.020539 kernel: ACPI: \_TZ_.FN03: New power resource
Nov 8 00:27:47.020545 kernel: ACPI: \_TZ_.FN04: New power resource
Nov 8 00:27:47.020551 kernel: ACPI: \PIN_: New power resource
Nov 8 00:27:47.020557 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Nov 8 00:27:47.020631 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:27:47.020686 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Nov 8 00:27:47.020735 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Nov 8 00:27:47.020743 kernel: PCI host bridge to bus 0000:00
Nov 8 00:27:47.020792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:27:47.020840 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:27:47.020883 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:27:47.020927 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Nov 8 00:27:47.020970 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Nov 8 00:27:47.021014 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Nov 8 00:27:47.021070 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Nov 8 00:27:47.021131 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Nov 8 00:27:47.021181 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Nov 8 00:27:47.021237 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Nov 8 00:27:47.021323 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Nov 8 00:27:47.021379 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Nov 8 00:27:47.021428 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Nov 8 00:27:47.021484 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Nov 8 00:27:47.021534 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Nov 8 00:27:47.021588 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Nov 8 00:27:47.021637 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Nov 8 00:27:47.021685 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Nov 8 00:27:47.021738 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Nov 8 00:27:47.021789 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Nov 8 00:27:47.021839 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Nov 8 00:27:47.021894 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Nov 8 00:27:47.021944 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 8 00:27:47.021996 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Nov 8 00:27:47.022047 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 8 00:27:47.022101 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Nov 8 00:27:47.022159 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Nov 8 00:27:47.022210 kernel: pci 0000:00:16.0: PME# supported from D3hot
Nov 8 00:27:47.022299 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Nov 8 00:27:47.022350 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Nov 8 00:27:47.022399 kernel: pci 0000:00:16.1: PME# supported from D3hot
Nov 8 00:27:47.022453 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Nov 8 00:27:47.022504 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Nov 8 00:27:47.022554 kernel: pci 0000:00:16.4: PME# supported from D3hot
Nov 8 00:27:47.022605 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Nov 8 00:27:47.022655 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Nov 8 00:27:47.022703 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Nov 8 00:27:47.022753 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Nov 8 00:27:47.022804 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Nov 8 00:27:47.022853 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Nov 8 00:27:47.022902 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Nov 8 00:27:47.022951 kernel: pci 0000:00:17.0: PME# supported from D3hot
Nov 8 00:27:47.023008 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Nov 8 00:27:47.023059 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Nov 8 00:27:47.023116 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Nov 8 00:27:47.023167 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Nov 8 00:27:47.023221 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Nov 8 00:27:47.023309 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Nov 8 00:27:47.023363 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Nov 8 00:27:47.023416 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Nov 8 00:27:47.023470 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Nov 8 00:27:47.023519 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Nov 8 00:27:47.023573 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Nov 8 00:27:47.023621 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 8 00:27:47.023677 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Nov 8 00:27:47.023735 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Nov 8 00:27:47.023785 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Nov 8 00:27:47.023834 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Nov 8 00:27:47.023887 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Nov 8 00:27:47.023936 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Nov 8 00:27:47.023987 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 8 00:27:47.024044 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Nov 8 00:27:47.024097 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Nov 8 00:27:47.024148 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Nov 8 00:27:47.024199 kernel: pci 0000:02:00.0: PME# supported from D3cold
Nov 8 00:27:47.024252 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 8 00:27:47.024337 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 8 00:27:47.024392 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Nov 8 00:27:47.024443 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Nov 8 00:27:47.024497 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Nov 8 00:27:47.024550 kernel: pci 0000:02:00.1: PME# supported from D3cold
Nov 8 00:27:47.024600 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 8 00:27:47.024651 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 8 00:27:47.024701 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Nov 8 00:27:47.024751 kernel: pci 0000:00:01.1: bridge window
[mem 0x95100000-0x952fffff] Nov 8 00:27:47.024799 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 00:27:47.024852 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Nov 8 00:27:47.024906 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 8 00:27:47.024958 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 8 00:27:47.025009 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 8 00:27:47.025060 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Nov 8 00:27:47.025112 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 8 00:27:47.025161 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.025215 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Nov 8 00:27:47.025291 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 00:27:47.025361 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 00:27:47.025415 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Nov 8 00:27:47.025466 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Nov 8 00:27:47.025518 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 8 00:27:47.025568 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Nov 8 00:27:47.025620 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 8 00:27:47.025671 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.025722 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Nov 8 00:27:47.025772 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 00:27:47.025821 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 00:27:47.025871 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Nov 8 00:27:47.025928 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Nov 8 00:27:47.025979 kernel: pci 0000:07:00.0: enabling Extended Tags Nov 8 00:27:47.026032 kernel: pci 0000:07:00.0: supports D1 D2 Nov 8 00:27:47.026083 
kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:27:47.026133 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Nov 8 00:27:47.026182 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.026231 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.026333 kernel: pci_bus 0000:08: extended config space not accessible Nov 8 00:27:47.026390 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Nov 8 00:27:47.026448 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 8 00:27:47.026501 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 8 00:27:47.026553 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Nov 8 00:27:47.026606 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:27:47.026658 kernel: pci 0000:08:00.0: supports D1 D2 Nov 8 00:27:47.026712 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:27:47.026764 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Nov 8 00:27:47.026815 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.026869 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.026878 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 8 00:27:47.026884 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 8 00:27:47.026890 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 8 00:27:47.026895 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 8 00:27:47.026901 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 8 00:27:47.026907 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 8 00:27:47.026913 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 8 00:27:47.026919 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 8 00:27:47.026926 kernel: iommu: Default domain type: Translated Nov 8 00:27:47.026932 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Nov 8 00:27:47.026937 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:27:47.026943 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:27:47.026949 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 8 00:27:47.026955 kernel: e820: reserve RAM buffer [mem 0x819c4000-0x83ffffff] Nov 8 00:27:47.026960 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Nov 8 00:27:47.026967 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Nov 8 00:27:47.026973 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 8 00:27:47.026979 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 8 00:27:47.027030 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Nov 8 00:27:47.027084 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Nov 8 00:27:47.027136 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:27:47.027144 kernel: vgaarb: loaded Nov 8 00:27:47.027150 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 8 00:27:47.027156 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Nov 8 00:27:47.027162 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:27:47.027169 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:27:47.027175 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:27:47.027181 kernel: pnp: PnP ACPI init Nov 8 00:27:47.027232 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 8 00:27:47.027327 kernel: pnp 00:02: [dma 0 disabled] Nov 8 00:27:47.027378 kernel: pnp 00:03: [dma 0 disabled] Nov 8 00:27:47.027427 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 8 00:27:47.027475 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 8 00:27:47.027524 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 8 00:27:47.027569 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 8 
00:27:47.027616 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 8 00:27:47.027660 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 8 00:27:47.027706 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 8 00:27:47.027751 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 8 00:27:47.027798 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 8 00:27:47.027843 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 8 00:27:47.027893 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 8 00:27:47.027942 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 8 00:27:47.027988 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 8 00:27:47.028032 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 8 00:27:47.028078 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 8 00:27:47.028126 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 8 00:27:47.028171 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 8 00:27:47.028220 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 8 00:27:47.028229 kernel: pnp: PnP ACPI: found 9 devices Nov 8 00:27:47.028235 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:27:47.028243 kernel: NET: Registered PF_INET protocol family Nov 8 00:27:47.028271 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:27:47.028277 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 8 00:27:47.028304 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:27:47.028310 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:27:47.028316 kernel: TCP bind hash table entries: 65536 (order: 9, 
2097152 bytes, linear) Nov 8 00:27:47.028322 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 8 00:27:47.028328 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:27:47.028334 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:27:47.028339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:27:47.028345 kernel: NET: Registered PF_XDP protocol family Nov 8 00:27:47.028397 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 8 00:27:47.028448 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 8 00:27:47.028497 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 8 00:27:47.028548 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:27:47.028599 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028651 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028702 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028754 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028806 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Nov 8 00:27:47.028857 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Nov 8 00:27:47.028906 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 00:27:47.028958 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Nov 8 00:27:47.029008 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Nov 8 00:27:47.029060 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 00:27:47.029109 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 00:27:47.029159 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Nov 8 00:27:47.029209 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 00:27:47.029282 kernel: pci 
0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 00:27:47.029351 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Nov 8 00:27:47.029401 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Nov 8 00:27:47.029456 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.029506 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.029558 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Nov 8 00:27:47.029607 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.029657 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.029703 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 8 00:27:47.029747 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:27:47.029792 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:27:47.029835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:27:47.029879 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 8 00:27:47.029925 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 8 00:27:47.029975 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Nov 8 00:27:47.030021 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 00:27:47.030074 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Nov 8 00:27:47.030120 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Nov 8 00:27:47.030169 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 8 00:27:47.030217 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Nov 8 00:27:47.030311 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 8 00:27:47.030357 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 8 00:27:47.030406 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Nov 8 00:27:47.030452 kernel: pci_bus 0000:08: resource 1 [mem 
0x94000000-0x950fffff] Nov 8 00:27:47.030460 kernel: PCI: CLS 64 bytes, default 64 Nov 8 00:27:47.030466 kernel: DMAR: No ATSR found Nov 8 00:27:47.030474 kernel: DMAR: No SATC found Nov 8 00:27:47.030480 kernel: DMAR: dmar0: Using Queued invalidation Nov 8 00:27:47.030530 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 8 00:27:47.030581 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 8 00:27:47.030631 kernel: pci 0000:00:01.1: Adding to iommu group 1 Nov 8 00:27:47.030680 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 8 00:27:47.030729 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 8 00:27:47.030778 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 8 00:27:47.030827 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 8 00:27:47.030879 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 8 00:27:47.030927 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 8 00:27:47.030976 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 8 00:27:47.031024 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 8 00:27:47.031074 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 8 00:27:47.031122 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 8 00:27:47.031172 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 8 00:27:47.031221 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 8 00:27:47.031321 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 8 00:27:47.031371 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 8 00:27:47.031422 kernel: pci 0000:00:1c.1: Adding to iommu group 12 Nov 8 00:27:47.031471 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 8 00:27:47.031521 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 8 00:27:47.031570 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 8 00:27:47.031618 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 8 00:27:47.031669 kernel: pci 0000:02:00.0: Adding to iommu group 1 Nov 8 00:27:47.031722 kernel: pci 0000:02:00.1: Adding to iommu group 1 Nov 8 
00:27:47.031773 kernel: pci 0000:04:00.0: Adding to iommu group 15 Nov 8 00:27:47.031824 kernel: pci 0000:05:00.0: Adding to iommu group 16 Nov 8 00:27:47.031875 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 8 00:27:47.031927 kernel: pci 0000:08:00.0: Adding to iommu group 17 Nov 8 00:27:47.031935 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 8 00:27:47.031941 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 00:27:47.031947 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB) Nov 8 00:27:47.031955 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 8 00:27:47.031961 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 8 00:27:47.031966 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 8 00:27:47.031972 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 8 00:27:47.032025 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 8 00:27:47.032033 kernel: Initialise system trusted keyrings Nov 8 00:27:47.032039 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 8 00:27:47.032045 kernel: Key type asymmetric registered Nov 8 00:27:47.032052 kernel: Asymmetric key parser 'x509' registered Nov 8 00:27:47.032058 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:27:47.032064 kernel: io scheduler mq-deadline registered Nov 8 00:27:47.032070 kernel: io scheduler kyber registered Nov 8 00:27:47.032076 kernel: io scheduler bfq registered Nov 8 00:27:47.032125 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 8 00:27:47.032174 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122 Nov 8 00:27:47.032224 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123 Nov 8 00:27:47.032318 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124 Nov 8 00:27:47.032370 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125 Nov 8 
00:27:47.032419 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126 Nov 8 00:27:47.032469 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127 Nov 8 00:27:47.032525 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 8 00:27:47.032533 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 8 00:27:47.032539 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Nov 8 00:27:47.032545 kernel: pstore: Using crash dump compression: deflate Nov 8 00:27:47.032553 kernel: pstore: Registered erst as persistent store backend Nov 8 00:27:47.032559 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:27:47.032565 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:27:47.032570 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:27:47.032576 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 8 00:27:47.032628 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 8 00:27:47.032637 kernel: i8042: PNP: No PS/2 controller found. 
Nov 8 00:27:47.032681 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 8 00:27:47.032730 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 8 00:27:47.032775 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-08T00:27:45 UTC (1762561665) Nov 8 00:27:47.032821 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 8 00:27:47.032829 kernel: intel_pstate: Intel P-state driver initializing Nov 8 00:27:47.032835 kernel: intel_pstate: Disabling energy efficiency optimization Nov 8 00:27:47.032841 kernel: intel_pstate: HWP enabled Nov 8 00:27:47.032847 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 8 00:27:47.032853 kernel: vesafb: scrolling: redraw Nov 8 00:27:47.032859 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 8 00:27:47.032866 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000388354e9, using 768k, total 768k Nov 8 00:27:47.032872 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:27:47.032878 kernel: fb0: VESA VGA frame buffer device Nov 8 00:27:47.032884 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:27:47.032890 kernel: Segment Routing with IPv6 Nov 8 00:27:47.032895 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:27:47.032901 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:27:47.032907 kernel: Key type dns_resolver registered Nov 8 00:27:47.032913 kernel: microcode: Current revision: 0x00000102 Nov 8 00:27:47.032919 kernel: microcode: Microcode Update Driver: v2.2. 
Nov 8 00:27:47.032925 kernel: IPI shorthand broadcast: enabled Nov 8 00:27:47.032931 kernel: sched_clock: Marking stable (1661307766, 1363989216)->(4461564231, -1436267249) Nov 8 00:27:47.032937 kernel: registered taskstats version 1 Nov 8 00:27:47.032942 kernel: Loading compiled-in X.509 certificates Nov 8 00:27:47.032948 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:27:47.032954 kernel: Key type .fscrypt registered Nov 8 00:27:47.032960 kernel: Key type fscrypt-provisioning registered Nov 8 00:27:47.032965 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:27:47.032972 kernel: ima: No architecture policies found Nov 8 00:27:47.032978 kernel: clk: Disabling unused clocks Nov 8 00:27:47.032984 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:27:47.032989 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:27:47.032995 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:27:47.033001 kernel: Run /init as init process Nov 8 00:27:47.033007 kernel: with arguments: Nov 8 00:27:47.033012 kernel: /init Nov 8 00:27:47.033018 kernel: with environment: Nov 8 00:27:47.033025 kernel: HOME=/ Nov 8 00:27:47.033030 kernel: TERM=linux Nov 8 00:27:47.033037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:27:47.033044 systemd[1]: Detected architecture x86-64. Nov 8 00:27:47.033051 systemd[1]: Running in initrd. Nov 8 00:27:47.033057 systemd[1]: No hostname configured, using default hostname. Nov 8 00:27:47.033063 systemd[1]: Hostname set to . Nov 8 00:27:47.033070 systemd[1]: Initializing machine ID from random generator. 
Nov 8 00:27:47.033076 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:27:47.033082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:27:47.033088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:27:47.033094 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:27:47.033101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:27:47.033107 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:27:47.033113 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:27:47.033120 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:27:47.033127 kernel: tsc: Refined TSC clocksource calibration: 3407.974 MHz Nov 8 00:27:47.033133 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fbb0eafc, max_idle_ns: 440795256507 ns Nov 8 00:27:47.033139 kernel: clocksource: Switched to clocksource tsc Nov 8 00:27:47.033145 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:27:47.033151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:27:47.033157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:27:47.033164 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:27:47.033171 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:27:47.033177 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:27:47.033183 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:27:47.033189 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 8 00:27:47.033195 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:27:47.033201 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:27:47.033207 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:27:47.033213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:27:47.033220 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:27:47.033226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:27:47.033232 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:27:47.033238 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:27:47.033247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:27:47.033253 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:27:47.033283 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:27:47.033290 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:27:47.033327 systemd-journald[267]: Collecting audit messages is disabled. Nov 8 00:27:47.033341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:27:47.033348 systemd-journald[267]: Journal started Nov 8 00:27:47.033363 systemd-journald[267]: Runtime Journal (/run/log/journal/c473add8d6934c6ea00aa6871e10ab56) is 8.0M, max 639.9M, 631.9M free. Nov 8 00:27:47.066985 systemd-modules-load[268]: Inserted module 'overlay' Nov 8 00:27:47.076437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:27:47.097247 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:27:47.097259 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:27:47.169474 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 8 00:27:47.169487 kernel: Bridge firewalling registered Nov 8 00:27:47.154431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:27:47.159329 systemd-modules-load[268]: Inserted module 'br_netfilter' Nov 8 00:27:47.180511 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:27:47.191645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:27:47.220609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:27:47.245696 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:27:47.252303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:27:47.281743 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:27:47.289064 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:27:47.293192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:27:47.294014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:27:47.294676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:27:47.299260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:27:47.301101 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:27:47.303670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:27:47.309701 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:27:47.322841 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 8 00:27:47.346990 systemd-resolved[299]: Positive Trust Anchors: Nov 8 00:27:47.346999 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:27:47.347040 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:27:47.349717 systemd-resolved[299]: Defaulting to hostname 'linux'. Nov 8 00:27:47.350524 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:27:47.356538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:27:47.468361 dracut-cmdline[306]: dracut-dracut-053 Nov 8 00:27:47.468361 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:27:47.549273 kernel: SCSI subsystem initialized Nov 8 00:27:47.572284 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:27:47.594273 kernel: iscsi: registered transport (tcp) Nov 8 00:27:47.627292 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:27:47.627309 kernel: QLogic iSCSI HBA Driver Nov 8 00:27:47.659736 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 8 00:27:47.679510 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:27:47.762316 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:27:47.762336 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:27:47.781949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:27:47.841275 kernel: raid6: avx2x4 gen() 53273 MB/s Nov 8 00:27:47.873321 kernel: raid6: avx2x2 gen() 53564 MB/s Nov 8 00:27:47.909608 kernel: raid6: avx2x1 gen() 45259 MB/s Nov 8 00:27:47.909625 kernel: raid6: using algorithm avx2x2 gen() 53564 MB/s Nov 8 00:27:47.956677 kernel: raid6: .... xor() 31034 MB/s, rmw enabled Nov 8 00:27:47.956694 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:27:47.997274 kernel: xor: automatically using best checksumming function avx Nov 8 00:27:48.115288 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:27:48.121119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:27:48.142406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:27:48.149762 systemd-udevd[493]: Using default interface naming scheme 'v255'. Nov 8 00:27:48.153377 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:27:48.187435 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:27:48.214363 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Nov 8 00:27:48.229083 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:27:48.254491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:27:48.346624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:27:48.391545 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 8 00:27:48.391583 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 8 00:27:48.361620 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:27:48.416294 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:27:48.416311 kernel: PTP clock support registered Nov 8 00:27:48.396050 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:27:48.451053 kernel: libata version 3.00 loaded. Nov 8 00:27:48.451073 kernel: ACPI: bus type USB registered Nov 8 00:27:48.451081 kernel: usbcore: registered new interface driver usbfs Nov 8 00:27:48.396084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:27:48.528308 kernel: usbcore: registered new interface driver hub Nov 8 00:27:48.528324 kernel: usbcore: registered new device driver usb Nov 8 00:27:48.528336 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:27:48.528344 kernel: AES CTR mode by8 optimization enabled Nov 8 00:27:48.528352 kernel: ahci 0000:00:17.0: version 3.0 Nov 8 00:27:48.490333 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:27:48.572352 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Nov 8 00:27:48.572438 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 8 00:27:48.547282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:27:48.601801 kernel: scsi host0: ahci Nov 8 00:27:48.603466 kernel: scsi host1: ahci Nov 8 00:27:48.603565 kernel: scsi host2: ahci Nov 8 00:27:48.603585 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 8 00:27:48.547328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:27:49.525018 kernel: scsi host3: ahci Nov 8 00:27:49.525175 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Nov 8 00:27:49.525184 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Nov 8 00:27:49.525269 kernel: scsi host4: ahci Nov 8 00:27:49.525337 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 8 00:27:49.525405 kernel: scsi host5: ahci Nov 8 00:27:49.525470 kernel: igb 0000:04:00.0: added PHC on eth0 Nov 8 00:27:49.525539 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 8 00:27:49.525604 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d4 Nov 8 00:27:49.525666 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Nov 8 00:27:49.525730 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 8 00:27:49.525792 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 8 00:27:49.525856 kernel: igb 0000:05:00.0: added PHC on eth1 Nov 8 00:27:49.525920 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 8 00:27:49.525986 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d5 Nov 8 00:27:49.526053 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Nov 8 00:27:49.526119 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 8 00:27:49.526181 kernel: scsi host6: ahci Nov 8 00:27:49.526249 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 8 00:27:49.526362 kernel: scsi host7: ahci Nov 8 00:27:49.526426 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 8 00:27:49.526489 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Nov 8 00:27:49.526500 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 8 00:27:49.526562 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Nov 8 00:27:49.526625 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Nov 8 00:27:49.526634 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 8 00:27:49.526695 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Nov 8 00:27:49.526703 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 8 00:27:49.526763 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Nov 8 00:27:49.526772 kernel: hub 1-0:1.0: USB hub found Nov 8 00:27:49.526843 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Nov 8 00:27:49.526852 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Nov 8 00:27:49.526859 kernel: hub 1-0:1.0: 16 ports detected Nov 8 00:27:49.526921 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Nov 8 00:27:49.526930 kernel: hub 2-0:1.0: USB hub found Nov 8 00:27:49.526998 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128 Nov 8 00:27:49.527007 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 8 00:27:49.527072 kernel: hub 2-0:1.0: 10 ports detected Nov 8 00:27:49.527133 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Nov 8 00:27:49.527207 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged
Nov 8 00:27:49.527389 kernel: ata8: SATA link down (SStatus 0 SControl 300) Nov 8 00:27:49.527405 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 8 00:27:49.527583 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 8 00:27:49.527599 kernel: hub 1-14:1.0: USB hub found Nov 8 00:27:49.527799 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 00:27:49.527817 kernel: hub 1-14:1.0: 4 ports detected Nov 8 00:27:49.527981 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 8 00:27:49.527997 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 8 00:27:49.528165 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 8 00:27:49.528178 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Nov 8 00:27:49.528367 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 00:27:49.528384 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 8 00:27:49.528560 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 8 00:27:49.528578 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 8 00:27:49.528596 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 00:27:49.528609 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 8 00:27:49.528622 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 8 00:27:48.600297 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:27:49.565165 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 8 00:27:49.497469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:27:49.628353 kernel: ata1.00: Features: NCQ-prio Nov 8 00:27:49.628366 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 8 00:27:49.628384 kernel: ata2.00: Features: NCQ-prio Nov 8 00:27:49.559826 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:27:49.608442 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:27:49.628429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:27:49.628501 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:27:49.647305 kernel: ata1.00: configured for UDMA/133 Nov 8 00:27:49.647322 kernel: ata2.00: configured for UDMA/133 Nov 8 00:27:49.647330 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 8 00:27:49.648465 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:27:49.740793 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 8 00:27:49.740933 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 8 00:27:49.741056 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Nov 8 00:27:49.741281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:27:50.247973 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 00:27:50.247994 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 00:27:50.248003 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 8 00:27:50.248101 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 8 00:27:50.248214 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 8 00:27:50.248291 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:27:50.248359 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Nov 8 00:27:50.248424 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 8 00:27:50.248487 kernel: sd 1:0:0:0: [sdb] Write Protect is off Nov 8 00:27:50.248551 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 00:27:50.248614 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 8 00:27:50.248679 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 8 00:27:50.248742 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 8 00:27:50.248805 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 00:27:50.248814 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 8 00:27:50.248875 kernel: ata2.00: Enabling discard_zeroes_data Nov 8 00:27:50.248884 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 8 00:27:50.248956 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Nov 8 00:27:50.249021 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:27:50.249030 kernel: GPT:9289727 != 937703087 Nov 8 00:27:50.249038 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:27:50.249045 kernel: GPT:9289727 != 937703087 Nov 8 00:27:50.249052 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 8 00:27:50.249060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:50.249067 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:27:50.249129 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:27:50.249138 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1 Nov 8 00:27:50.249205 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (694) Nov 8 00:27:50.249214 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (693) Nov 8 00:27:50.249222 kernel: usbcore: registered new interface driver usbhid Nov 8 00:27:50.249229 kernel: usbhid: USB HID core driver Nov 8 00:27:50.249236 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0 Nov 8 00:27:50.249316 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 8 00:27:50.181460 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:27:50.297782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 8 00:27:50.317669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 8 00:27:50.359442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 8 00:27:50.438322 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 8 00:27:50.438421 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 8 00:27:50.438431 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 8 00:27:50.401951 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. 
Nov 8 00:27:50.453321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 8 00:27:50.495486 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:27:50.525185 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 00:27:50.525200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:50.496116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:27:50.557472 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 00:27:50.557484 disk-uuid[716]: Primary Header is updated. Nov 8 00:27:50.557484 disk-uuid[716]: Secondary Entries is updated. Nov 8 00:27:50.557484 disk-uuid[716]: Secondary Header is updated. Nov 8 00:27:50.610426 kernel: GPT:disk_guids don't match. Nov 8 00:27:50.610437 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 00:27:50.610444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:50.627424 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:27:50.661309 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 00:27:50.661336 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:51.628329 kernel: ata1.00: Enabling discard_zeroes_data Nov 8 00:27:51.648184 disk-uuid[717]: The operation has completed successfully. Nov 8 00:27:51.656499 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:27:51.682214 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:27:51.682335 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:27:51.720508 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:27:51.759297 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:27:51.759363 sh[743]: Success Nov 8 00:27:51.800383 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Nov 8 00:27:51.817189 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:27:51.834625 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:27:51.881500 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:27:51.881522 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:27:51.903041 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:27:51.922244 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:27:51.940434 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:27:51.979250 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 8 00:27:51.981164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:27:51.989649 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:27:51.995373 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:27:52.022690 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:27:52.066691 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:52.066719 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:27:52.076952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 8 00:27:52.158137 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:27:52.158154 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:27:52.158162 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:27:52.158171 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:52.174550 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:27:52.174689 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:27:52.185342 systemd-networkd[923]: lo: Link UP Nov 8 00:27:52.185344 systemd-networkd[923]: lo: Gained carrier Nov 8 00:27:52.187853 systemd-networkd[923]: Enumeration completed Nov 8 00:27:52.188677 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:52.201873 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:27:52.219982 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:52.221893 systemd[1]: Reached target network.target - Network. Nov 8 00:27:52.250026 systemd-networkd[923]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:52.250609 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:27:52.319330 unknown[928]: fetched base config from "system" Nov 8 00:27:52.317251 ignition[928]: Ignition 2.19.0 Nov 8 00:27:52.319334 unknown[928]: fetched user config from "system" Nov 8 00:27:52.317256 ignition[928]: Stage: fetch-offline Nov 8 00:27:52.320278 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:27:52.317281 ignition[928]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:52.335751 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Nov 8 00:27:52.317286 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 00:27:52.342474 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:27:52.317343 ignition[928]: parsed url from cmdline: "" Nov 8 00:27:52.317345 ignition[928]: no config URL provided Nov 8 00:27:52.451476 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Nov 8 00:27:52.446197 systemd-networkd[923]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:52.317348 ignition[928]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:27:52.317374 ignition[928]: parsing config with SHA512: f2dca2712e6c7c40822eb4a30add3e222cd1e974a3e0f9ab3f62e86180ae505b1d01378a0aeae5d434b7d4a7017cfd8e0c57fbdf53096afda415b8439fdebc4b Nov 8 00:27:52.319546 ignition[928]: fetch-offline: fetch-offline passed Nov 8 00:27:52.319548 ignition[928]: POST message to Packet Timeline Nov 8 00:27:52.319551 ignition[928]: POST Status error: resource requires networking Nov 8 00:27:52.319589 ignition[928]: Ignition finished successfully Nov 8 00:27:52.351350 ignition[939]: Ignition 2.19.0 Nov 8 00:27:52.351355 ignition[939]: Stage: kargs Nov 8 00:27:52.351487 ignition[939]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:52.351496 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 00:27:52.352192 ignition[939]: kargs: kargs passed Nov 8 00:27:52.352195 ignition[939]: POST message to Packet Timeline Nov 8 00:27:52.352207 ignition[939]: GET https://metadata.packet.net/metadata: attempt #1 Nov 8 00:27:52.352751 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50675->[::1]:53: read: connection refused Nov 8 00:27:52.553861 ignition[939]: GET https://metadata.packet.net/metadata: attempt #2 Nov 8 00:27:52.554869 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58860->[::1]:53: read: connection refused
Nov 8 00:27:52.702281 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Nov 8 00:27:52.703756 systemd-networkd[923]: eno1: Link UP Nov 8 00:27:52.703985 systemd-networkd[923]: eno2: Link UP Nov 8 00:27:52.704196 systemd-networkd[923]: enp2s0f0np0: Link UP Nov 8 00:27:52.704455 systemd-networkd[923]: enp2s0f0np0: Gained carrier Nov 8 00:27:52.714828 systemd-networkd[923]: enp2s0f1np1: Link UP Nov 8 00:27:52.741436 systemd-networkd[923]: enp2s0f0np0: DHCPv4 address 139.178.94.39/31, gateway 139.178.94.38 acquired from 145.40.83.140 Nov 8 00:27:52.955333 ignition[939]: GET https://metadata.packet.net/metadata: attempt #3 Nov 8 00:27:52.956614 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38560->[::1]:53: read: connection refused Nov 8 00:27:53.485984 systemd-networkd[923]: enp2s0f1np1: Gained carrier Nov 8 00:27:53.741734 systemd-networkd[923]: enp2s0f0np0: Gained IPv6LL Nov 8 00:27:53.757517 ignition[939]: GET https://metadata.packet.net/metadata: attempt #4 Nov 8 00:27:53.758603 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52178->[::1]:53: read: connection refused Nov 8 00:27:55.277749 systemd-networkd[923]: enp2s0f1np1: Gained IPv6LL Nov 8 00:27:55.360348 ignition[939]: GET https://metadata.packet.net/metadata: attempt #5 Nov 8 00:27:55.361529 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41319->[::1]:53: read: connection refused Nov 8 00:27:58.563950 ignition[939]: GET https://metadata.packet.net/metadata: attempt #6 Nov 8 00:27:59.493163 ignition[939]: GET result: OK Nov 8 00:28:07.579184 ignition[939]: Ignition finished successfully Nov 8 00:28:07.584608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:28:07.611552 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:28:07.617804 ignition[956]: Ignition 2.19.0 Nov 8 00:28:07.617808 ignition[956]: Stage: disks Nov 8 00:28:07.617918 ignition[956]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:07.617925 ignition[956]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 00:28:07.618462 ignition[956]: disks: disks passed Nov 8 00:28:07.618464 ignition[956]: POST message to Packet Timeline Nov 8 00:28:07.618473 ignition[956]: GET https://metadata.packet.net/metadata: attempt #1 Nov 8 00:28:09.654587 ignition[956]: GET result: OK Nov 8 00:28:10.488971 ignition[956]: Ignition finished successfully Nov 8 00:28:10.492456 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:28:10.507565 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:28:10.525515 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:28:10.547523 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:28:10.569681 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:28:10.589685 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:28:10.624490 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:28:10.660647 systemd-fsck[972]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:28:10.671824 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:28:10.694476 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:28:10.799819 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:28:10.815475 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:28:10.808730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Nov 8 00:28:10.841435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:28:10.850343 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:28:10.974804 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (981) Nov 8 00:28:10.974817 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:10.974825 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:10.974832 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:10.974843 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:10.974850 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:28:10.875174 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:28:10.991905 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 8 00:28:11.014443 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:28:11.014461 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:28:11.078437 coreos-metadata[983]: Nov 08 00:28:11.033 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 8 00:28:11.024438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:28:11.109345 coreos-metadata[999]: Nov 08 00:28:11.035 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 8 00:28:11.041541 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:28:11.081489 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 8 00:28:11.139385 initrd-setup-root[1013]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:28:11.149362 initrd-setup-root[1020]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:28:11.159514 initrd-setup-root[1027]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:28:11.169277 initrd-setup-root[1034]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:28:11.181297 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:11.204449 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:28:11.244467 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:11.234878 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:28:11.245060 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:28:11.280430 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:28:11.299436 ignition[1101]: INFO : Ignition 2.19.0 Nov 8 00:28:11.299436 ignition[1101]: INFO : Stage: mount Nov 8 00:28:11.299436 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:11.299436 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 00:28:11.299436 ignition[1101]: INFO : mount: mount passed Nov 8 00:28:11.299436 ignition[1101]: INFO : POST message to Packet Timeline Nov 8 00:28:11.299436 ignition[1101]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 8 00:28:12.030916 coreos-metadata[999]: Nov 08 00:28:12.030 INFO Fetch successful Nov 8 00:28:12.068373 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 8 00:28:12.068448 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. 
Nov 8 00:28:12.305760 ignition[1101]: INFO : GET result: OK Nov 8 00:28:12.698335 coreos-metadata[983]: Nov 08 00:28:12.698 INFO Fetch successful Nov 8 00:28:12.731082 coreos-metadata[983]: Nov 08 00:28:12.731 INFO wrote hostname ci-4081.3.6-n-8b27c00582 to /sysroot/etc/hostname Nov 8 00:28:12.732406 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:28:12.971764 ignition[1101]: INFO : Ignition finished successfully Nov 8 00:28:12.974727 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:28:13.009425 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:28:13.013144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:28:13.078283 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1124) Nov 8 00:28:13.108166 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:28:13.108182 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:28:13.126233 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:28:13.165306 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:28:13.165342 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:28:13.179437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:28:13.211014 ignition[1141]: INFO : Ignition 2.19.0 Nov 8 00:28:13.211014 ignition[1141]: INFO : Stage: files Nov 8 00:28:13.226503 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:13.226503 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 00:28:13.226503 ignition[1141]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:28:13.226503 ignition[1141]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:28:13.226503 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:28:13.226503 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:28:13.215574 unknown[1141]: wrote ssh authorized keys file for user: core Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:28:13.611592 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 8 00:28:13.790284 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:28:14.377616 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:28:14.377616 ignition[1141]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:28:14.407500 ignition[1141]: INFO : files: files passed Nov 8 00:28:14.407500 ignition[1141]: INFO : POST message to Packet Timeline Nov 8 00:28:14.407500 ignition[1141]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 8 00:28:15.438586 ignition[1141]: INFO : GET result: OK Nov 8 00:28:15.884195 ignition[1141]: INFO : Ignition finished successfully Nov 8 00:28:15.885924 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:28:15.920483 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:28:15.930908 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:28:15.951721 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:28:15.951809 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:28:16.002546 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:16.002546 initrd-setup-root-after-ignition[1180]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:15.973940 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:28:16.051590 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:28:15.994880 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:28:16.028635 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:28:16.119931 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:28:16.119980 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:28:16.138679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:28:16.149557 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:28:16.166590 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:28:16.177553 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:28:16.259535 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:16.285667 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Nov 8 00:28:16.315147 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:28:16.315691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:28:16.346035 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:28:16.365942 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:28:16.366374 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:28:16.403679 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:28:16.413976 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:28:16.432955 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:28:16.450962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:28:16.472954 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:28:16.494982 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:28:16.515974 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:28:16.537994 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:28:16.558976 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:28:16.578949 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:28:16.597848 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:28:16.598274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:28:16.623211 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:28:16.642996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:28:16.663830 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:28:16.664314 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 8 00:28:16.685852 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:28:16.686286 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:28:16.716946 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:28:16.717429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:28:16.737170 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:28:16.755812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:28:16.760475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:28:16.777971 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:28:16.797954 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:28:16.815901 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:28:16.816209 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:28:16.826141 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:28:16.826476 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:28:16.857055 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:28:16.966341 ignition[1205]: INFO : Ignition 2.19.0 Nov 8 00:28:16.966341 ignition[1205]: INFO : Stage: umount Nov 8 00:28:16.966341 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:28:16.966341 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 8 00:28:16.966341 ignition[1205]: INFO : umount: umount passed Nov 8 00:28:16.966341 ignition[1205]: INFO : POST message to Packet Timeline Nov 8 00:28:16.966341 ignition[1205]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 8 00:28:16.857481 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. 
Nov 8 00:28:16.877055 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:28:16.877464 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:28:16.895023 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:28:16.895436 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:28:16.929405 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:28:16.932834 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:28:16.948438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:28:16.948603 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:28:16.977548 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:28:16.977615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:28:17.023506 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:28:17.024343 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:28:17.024459 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:28:17.030432 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:28:17.030560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:28:19.589911 ignition[1205]: INFO : GET result: OK Nov 8 00:28:21.134211 ignition[1205]: INFO : Ignition finished successfully Nov 8 00:28:21.137130 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:28:21.137447 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:28:21.155523 systemd[1]: Stopped target network.target - Network. Nov 8 00:28:21.170505 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:28:21.170691 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:28:21.188585 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Nov 8 00:28:21.188723 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:28:21.206642 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:28:21.206801 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:28:21.225786 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:28:21.225958 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:28:21.244780 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:28:21.244953 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:28:21.264176 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:28:21.274413 systemd-networkd[923]: enp2s0f1np1: DHCPv6 lease lost Nov 8 00:28:21.283705 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:28:21.290460 systemd-networkd[923]: enp2s0f0np0: DHCPv6 lease lost Nov 8 00:28:21.302400 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:28:21.302692 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:28:21.321568 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:28:21.321942 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:28:21.341790 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:28:21.341904 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:28:21.373494 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:28:21.400394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:28:21.400439 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:28:21.419510 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:28:21.419599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 8 00:28:21.439664 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:28:21.439830 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:28:21.457656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:28:21.457824 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:28:21.478010 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:28:21.500743 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:28:21.501131 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:28:21.530802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:28:21.530844 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:28:21.557361 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:28:21.557390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:28:21.577460 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:28:21.577542 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:28:21.616436 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:28:21.616577 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:28:21.646646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:28:21.646781 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:28:21.932477 systemd-journald[267]: Received SIGTERM from PID 1 (systemd). Nov 8 00:28:21.687606 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:28:21.698446 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Nov 8 00:28:21.698592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:28:21.719528 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:28:21.719661 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:28:21.738550 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:28:21.738676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:28:21.760516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:28:21.760644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:28:21.782540 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:28:21.782759 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:28:21.803031 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:28:21.803277 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:28:21.825290 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:28:21.857702 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:28:21.887146 systemd[1]: Switching root. 
Nov 8 00:28:22.037529 systemd-journald[267]: Journal stopped Nov 8 00:27:47.018735 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:27:47.018750 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:27:47.018757 kernel: BIOS-provided physical RAM map: Nov 8 00:27:47.018762 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Nov 8 00:27:47.018766 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Nov 8 00:27:47.018770 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Nov 8 00:27:47.018775 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Nov 8 00:27:47.018779 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Nov 8 00:27:47.018783 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819c3fff] usable Nov 8 00:27:47.018787 kernel: BIOS-e820: [mem 0x00000000819c4000-0x00000000819c4fff] ACPI NVS Nov 8 00:27:47.018791 kernel: BIOS-e820: [mem 0x00000000819c5000-0x00000000819c5fff] reserved Nov 8 00:27:47.018796 kernel: BIOS-e820: [mem 0x00000000819c6000-0x000000008afcdfff] usable Nov 8 00:27:47.018801 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved Nov 8 00:27:47.018805 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable Nov 8 00:27:47.018810 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS Nov 8 00:27:47.018815 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved 
Nov 8 00:27:47.018821 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Nov 8 00:27:47.018825 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Nov 8 00:27:47.018830 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Nov 8 00:27:47.018835 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Nov 8 00:27:47.018840 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Nov 8 00:27:47.018844 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Nov 8 00:27:47.018849 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Nov 8 00:27:47.018854 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Nov 8 00:27:47.018858 kernel: NX (Execute Disable) protection: active Nov 8 00:27:47.018863 kernel: APIC: Static calls initialized Nov 8 00:27:47.018868 kernel: SMBIOS 3.2.1 present. Nov 8 00:27:47.018873 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024 Nov 8 00:27:47.018878 kernel: tsc: Detected 3400.000 MHz processor Nov 8 00:27:47.018883 kernel: tsc: Detected 3399.906 MHz TSC Nov 8 00:27:47.018888 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:27:47.018893 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:27:47.018898 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Nov 8 00:27:47.018903 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Nov 8 00:27:47.018908 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:27:47.018913 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Nov 8 00:27:47.018918 kernel: Using GB pages for direct mapping Nov 8 00:27:47.018923 kernel: ACPI: Early table checksum verification disabled Nov 8 00:27:47.018928 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Nov 8 00:27:47.018933 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 
00010013) Nov 8 00:27:47.018940 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013) Nov 8 00:27:47.018945 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Nov 8 00:27:47.018951 kernel: ACPI: FACS 0x000000008C66DF80 000040 Nov 8 00:27:47.018956 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013) Nov 8 00:27:47.018962 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013) Nov 8 00:27:47.018967 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Nov 8 00:27:47.018972 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Nov 8 00:27:47.018977 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000) Nov 8 00:27:47.018983 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Nov 8 00:27:47.018988 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Nov 8 00:27:47.018993 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Nov 8 00:27:47.018999 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 8 00:27:47.019004 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Nov 8 00:27:47.019009 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527) Nov 8 00:27:47.019014 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 8 00:27:47.019019 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Nov 8 00:27:47.019024 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Nov 8 00:27:47.019030 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Nov 8 00:27:47.019035 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 
00000002 01000013) Nov 8 00:27:47.019040 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Nov 8 00:27:47.019046 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Nov 8 00:27:47.019051 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013) Nov 8 00:27:47.019056 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Nov 8 00:27:47.019061 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Nov 8 00:27:47.019066 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Nov 8 00:27:47.019071 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013) Nov 8 00:27:47.019077 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Nov 8 00:27:47.019082 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Nov 8 00:27:47.019088 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Nov 8 00:27:47.019093 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Nov 8 00:27:47.019098 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Nov 8 00:27:47.019103 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703] Nov 8 00:27:47.019108 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed] Nov 8 00:27:47.019113 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf] Nov 8 00:27:47.019118 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833] Nov 8 00:27:47.019124 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b] Nov 8 00:27:47.019129 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b] Nov 8 00:27:47.019135 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b] Nov 8 00:27:47.019140 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0] Nov 8 00:27:47.019145 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3] Nov 8 00:27:47.019150 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d] Nov 8 00:27:47.019155 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba] Nov 8 00:27:47.019160 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7] Nov 8 00:27:47.019165 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5] Nov 8 00:27:47.019170 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e] Nov 8 00:27:47.019175 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1] Nov 8 00:27:47.019181 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b] Nov 8 00:27:47.019186 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d] Nov 8 00:27:47.019191 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041] Nov 8 00:27:47.019196 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b] Nov 8 00:27:47.019201 kernel: ACPI: Reserving DBG2 table memory at [mem 
0x8c598080-0x8c5980d3] Nov 8 00:27:47.019206 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e] Nov 8 00:27:47.019211 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf] Nov 8 00:27:47.019217 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3] Nov 8 00:27:47.019222 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b] Nov 8 00:27:47.019228 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe] Nov 8 00:27:47.019233 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7] Nov 8 00:27:47.019238 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17] Nov 8 00:27:47.019245 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47] Nov 8 00:27:47.019250 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77] Nov 8 00:27:47.019255 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3] Nov 8 00:27:47.019279 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359] Nov 8 00:27:47.019284 kernel: No NUMA configuration found Nov 8 00:27:47.019304 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Nov 8 00:27:47.019309 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Nov 8 00:27:47.019315 kernel: Zone ranges: Nov 8 00:27:47.019320 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:27:47.019325 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 8 00:27:47.019330 kernel: Normal [mem 0x0000000100000000-0x000000086effffff] Nov 8 00:27:47.019336 kernel: Movable zone start for each node Nov 8 00:27:47.019341 kernel: Early memory node ranges Nov 8 00:27:47.019346 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Nov 8 00:27:47.019351 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Nov 8 00:27:47.019356 kernel: node 0: [mem 0x0000000040400000-0x00000000819c3fff] Nov 8 00:27:47.019362 kernel: node 0: [mem 
0x00000000819c6000-0x000000008afcdfff] Nov 8 00:27:47.019367 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff] Nov 8 00:27:47.019372 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Nov 8 00:27:47.019377 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Nov 8 00:27:47.019386 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Nov 8 00:27:47.019392 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:27:47.019398 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Nov 8 00:27:47.019403 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Nov 8 00:27:47.019410 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Nov 8 00:27:47.019415 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Nov 8 00:27:47.019420 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges Nov 8 00:27:47.019426 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Nov 8 00:27:47.019431 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Nov 8 00:27:47.019437 kernel: ACPI: PM-Timer IO Port: 0x1808 Nov 8 00:27:47.019442 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Nov 8 00:27:47.019448 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Nov 8 00:27:47.019453 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Nov 8 00:27:47.019460 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Nov 8 00:27:47.019465 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Nov 8 00:27:47.019470 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Nov 8 00:27:47.019476 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Nov 8 00:27:47.019481 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Nov 8 00:27:47.019487 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Nov 8 00:27:47.019492 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Nov 8 00:27:47.019497 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x0b] high edge lint[0x1]) Nov 8 00:27:47.019503 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Nov 8 00:27:47.019509 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Nov 8 00:27:47.019514 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Nov 8 00:27:47.019520 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Nov 8 00:27:47.019525 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Nov 8 00:27:47.019531 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Nov 8 00:27:47.019536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:27:47.019541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:27:47.019547 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:27:47.019552 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:27:47.019559 kernel: TSC deadline timer available Nov 8 00:27:47.019564 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Nov 8 00:27:47.019570 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Nov 8 00:27:47.019575 kernel: Booting paravirtualized kernel on bare hardware Nov 8 00:27:47.019581 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:27:47.019587 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 8 00:27:47.019592 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Nov 8 00:27:47.019597 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Nov 8 00:27:47.019603 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 8 00:27:47.019610 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 
flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:27:47.019615 kernel: random: crng init done Nov 8 00:27:47.019621 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Nov 8 00:27:47.019626 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 8 00:27:47.019632 kernel: Fallback order for Node 0: 0 Nov 8 00:27:47.019637 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416 Nov 8 00:27:47.019642 kernel: Policy zone: Normal Nov 8 00:27:47.019648 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:27:47.019654 kernel: software IO TLB: area num 16. Nov 8 00:27:47.019660 kernel: Memory: 32720308K/33452984K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 732416K reserved, 0K cma-reserved) Nov 8 00:27:47.019666 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 8 00:27:47.019671 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:27:47.019676 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:27:47.019682 kernel: Dynamic Preempt: voluntary Nov 8 00:27:47.019687 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:27:47.019695 kernel: rcu: RCU event tracing is enabled. Nov 8 00:27:47.019700 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 8 00:27:47.019707 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:27:47.019712 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:27:47.019718 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:27:47.019723 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 8 00:27:47.019729 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 8 00:27:47.019734 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Nov 8 00:27:47.019739 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:27:47.019745 kernel: Console: colour dummy device 80x25 Nov 8 00:27:47.019750 kernel: printk: console [tty0] enabled Nov 8 00:27:47.019756 kernel: printk: console [ttyS1] enabled Nov 8 00:27:47.019763 kernel: ACPI: Core revision 20230628 Nov 8 00:27:47.019768 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns Nov 8 00:27:47.019774 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:27:47.019779 kernel: DMAR: Host address width 39 Nov 8 00:27:47.019785 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Nov 8 00:27:47.019790 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Nov 8 00:27:47.019796 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff Nov 8 00:27:47.019801 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Nov 8 00:27:47.019807 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Nov 8 00:27:47.019813 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Nov 8 00:27:47.019819 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Nov 8 00:27:47.019824 kernel: x2apic enabled Nov 8 00:27:47.019830 kernel: APIC: Switched APIC routing to: cluster x2apic Nov 8 00:27:47.019835 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:27:47.019840 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Nov 8 00:27:47.019846 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
6799.81 BogoMIPS (lpj=3399906) Nov 8 00:27:47.019852 kernel: CPU0: Thermal monitoring enabled (TM1) Nov 8 00:27:47.019857 kernel: process: using mwait in idle threads Nov 8 00:27:47.019864 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:27:47.019869 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:27:47.019875 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:27:47.019880 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 8 00:27:47.019885 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 8 00:27:47.019891 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 8 00:27:47.019896 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 8 00:27:47.019902 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 8 00:27:47.019907 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:27:47.019914 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:27:47.019919 kernel: TAA: Mitigation: TSX disabled Nov 8 00:27:47.019925 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Nov 8 00:27:47.019930 kernel: SRBDS: Mitigation: Microcode Nov 8 00:27:47.019936 kernel: GDS: Mitigation: Microcode Nov 8 00:27:47.019941 kernel: active return thunk: its_return_thunk Nov 8 00:27:47.019946 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:27:47.019952 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace Nov 8 00:27:47.019957 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:27:47.019964 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:27:47.019969 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:27:47.019975 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Nov 8 00:27:47.019980 kernel: x86/fpu: Supporting 
XSAVE feature 0x010: 'MPX CSR' Nov 8 00:27:47.019986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:27:47.019991 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Nov 8 00:27:47.019997 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Nov 8 00:27:47.020002 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. Nov 8 00:27:47.020008 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:27:47.020014 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:27:47.020019 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:27:47.020025 kernel: landlock: Up and running. Nov 8 00:27:47.020030 kernel: SELinux: Initializing. Nov 8 00:27:47.020036 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:27:47.020041 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:27:47.020047 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Nov 8 00:27:47.020052 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 8 00:27:47.020059 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 8 00:27:47.020064 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 8 00:27:47.020070 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Nov 8 00:27:47.020075 kernel: ... version: 4 Nov 8 00:27:47.020081 kernel: ... bit width: 48 Nov 8 00:27:47.020086 kernel: ... generic registers: 4 Nov 8 00:27:47.020092 kernel: ... value mask: 0000ffffffffffff Nov 8 00:27:47.020097 kernel: ... max period: 00007fffffffffff Nov 8 00:27:47.020103 kernel: ... fixed-purpose events: 3 Nov 8 00:27:47.020109 kernel: ... 
event mask: 000000070000000f Nov 8 00:27:47.020114 kernel: signal: max sigframe size: 2032 Nov 8 00:27:47.020120 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Nov 8 00:27:47.020125 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:27:47.020131 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:27:47.020137 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Nov 8 00:27:47.020142 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:27:47.020147 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:27:47.020153 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Nov 8 00:27:47.020159 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 8 00:27:47.020165 kernel: smp: Brought up 1 node, 16 CPUs Nov 8 00:27:47.020171 kernel: smpboot: Max logical packages: 1 Nov 8 00:27:47.020176 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Nov 8 00:27:47.020181 kernel: devtmpfs: initialized Nov 8 00:27:47.020187 kernel: x86/mm: Memory block size: 128MB Nov 8 00:27:47.020192 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819c4000-0x819c4fff] (4096 bytes) Nov 8 00:27:47.020198 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes) Nov 8 00:27:47.020203 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:27:47.020210 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 8 00:27:47.020215 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:27:47.020221 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:27:47.020226 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:27:47.020232 kernel: audit: type=2000 audit(1762561661.119:1): state=initialized audit_enabled=0 res=1 Nov 
8 00:27:47.020237 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:27:47.020244 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:27:47.020250 kernel: cpuidle: using governor menu Nov 8 00:27:47.020275 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:27:47.020281 kernel: dca service started, version 1.12.1 Nov 8 00:27:47.020287 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 8 00:27:47.020292 kernel: PCI: Using configuration type 1 for base access Nov 8 00:27:47.020311 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 8 00:27:47.020317 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 8 00:27:47.020322 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:27:47.020328 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:27:47.020333 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:27:47.020338 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:27:47.020345 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:27:47.020350 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:27:47.020356 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:27:47.020361 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 8 00:27:47.020367 kernel: ACPI: Dynamic OEM Table Load: Nov 8 00:27:47.020372 kernel: ACPI: SSDT 0xFFFF9B3181B33400 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 8 00:27:47.020378 kernel: ACPI: Dynamic OEM Table Load: Nov 8 00:27:47.020383 kernel: ACPI: SSDT 0xFFFF9B3181B29000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 8 00:27:47.020389 kernel: ACPI: Dynamic OEM Table Load: Nov 8 00:27:47.020395 kernel: ACPI: SSDT 0xFFFF9B3180247F00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 8 00:27:47.020401 kernel: ACPI: Dynamic OEM Table Load: Nov 
8 00:27:47.020406 kernel: ACPI: SSDT 0xFFFF9B3181E58800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 8 00:27:47.020412 kernel: ACPI: Dynamic OEM Table Load: Nov 8 00:27:47.020417 kernel: ACPI: SSDT 0xFFFF9B318012C000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 8 00:27:47.020422 kernel: ACPI: Dynamic OEM Table Load: Nov 8 00:27:47.020428 kernel: ACPI: SSDT 0xFFFF9B3181B35000 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 8 00:27:47.020433 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 8 00:27:47.020439 kernel: ACPI: Interpreter enabled Nov 8 00:27:47.020445 kernel: ACPI: PM: (supports S0 S5) Nov 8 00:27:47.020451 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:27:47.020456 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 8 00:27:47.020462 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 8 00:27:47.020467 kernel: HEST: Table parsing has been initialized. Nov 8 00:27:47.020472 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 8 00:27:47.020478 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:27:47.020483 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 8 00:27:47.020489 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 8 00:27:47.020496 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 8 00:27:47.020501 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 8 00:27:47.020507 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 8 00:27:47.020512 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 8 00:27:47.020518 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 8 00:27:47.020523 kernel: ACPI: \_TZ_.FN00: New power resource Nov 8 00:27:47.020529 kernel: ACPI: \_TZ_.FN01: New power resource Nov 8 00:27:47.020534 kernel: ACPI: \_TZ_.FN02: New power resource Nov 8 00:27:47.020539 kernel: ACPI: \_TZ_.FN03: New power resource Nov 8 00:27:47.020545 kernel: ACPI: \_TZ_.FN04: New power resource Nov 8 00:27:47.020551 kernel: ACPI: \PIN_: New power resource Nov 8 00:27:47.020557 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 8 00:27:47.020631 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:27:47.020686 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 8 00:27:47.020735 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 8 00:27:47.020743 kernel: PCI host bridge to bus 0000:00 Nov 8 00:27:47.020792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:27:47.020840 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:27:47.020883 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:27:47.020927 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 8 00:27:47.020970 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff 
window] Nov 8 00:27:47.021014 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 8 00:27:47.021070 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 8 00:27:47.021131 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 8 00:27:47.021181 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.021237 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400 Nov 8 00:27:47.021323 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.021379 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 8 00:27:47.021428 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 8 00:27:47.021484 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 8 00:27:47.021534 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 8 00:27:47.021588 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 8 00:27:47.021637 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 8 00:27:47.021685 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 8 00:27:47.021738 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 8 00:27:47.021789 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 8 00:27:47.021839 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 8 00:27:47.021894 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 8 00:27:47.021944 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 8 00:27:47.021996 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 8 00:27:47.022047 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 8 00:27:47.022101 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 8 00:27:47.022159 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 8 00:27:47.022210 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 8 00:27:47.022299 
kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 8 00:27:47.022350 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 8 00:27:47.022399 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 8 00:27:47.022453 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 8 00:27:47.022504 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 8 00:27:47.022554 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 8 00:27:47.022605 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 8 00:27:47.022655 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 8 00:27:47.022703 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 8 00:27:47.022753 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 8 00:27:47.022804 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 8 00:27:47.022853 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 8 00:27:47.022902 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 8 00:27:47.022951 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 8 00:27:47.023008 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 8 00:27:47.023059 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.023116 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 8 00:27:47.023167 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.023221 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 8 00:27:47.023309 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.023363 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 8 00:27:47.023416 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.023470 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400 Nov 8 00:27:47.023519 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.023573 kernel: pci 
0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 8 00:27:47.023621 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 8 00:27:47.023677 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 8 00:27:47.023735 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 8 00:27:47.023785 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 8 00:27:47.023834 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 8 00:27:47.023887 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 8 00:27:47.023936 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 8 00:27:47.023987 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:27:47.024044 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000 Nov 8 00:27:47.024097 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 8 00:27:47.024148 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 8 00:27:47.024199 kernel: pci 0000:02:00.0: PME# supported from D3cold Nov 8 00:27:47.024252 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 8 00:27:47.024337 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 8 00:27:47.024392 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000 Nov 8 00:27:47.024443 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 8 00:27:47.024497 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 8 00:27:47.024550 kernel: pci 0000:02:00.1: PME# supported from D3cold Nov 8 00:27:47.024600 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 8 00:27:47.024651 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 8 00:27:47.024701 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Nov 8 00:27:47.024751 kernel: pci 0000:00:01.1: bridge window 
[mem 0x95100000-0x952fffff] Nov 8 00:27:47.024799 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 00:27:47.024852 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Nov 8 00:27:47.024906 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 8 00:27:47.024958 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 8 00:27:47.025009 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 8 00:27:47.025060 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f] Nov 8 00:27:47.025112 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 8 00:27:47.025161 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.025215 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Nov 8 00:27:47.025291 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 00:27:47.025361 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 00:27:47.025415 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect Nov 8 00:27:47.025466 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000 Nov 8 00:27:47.025518 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 8 00:27:47.025568 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f] Nov 8 00:27:47.025620 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 8 00:27:47.025671 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold Nov 8 00:27:47.025722 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Nov 8 00:27:47.025772 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 00:27:47.025821 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 00:27:47.025871 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Nov 8 00:27:47.025928 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400 Nov 8 00:27:47.025979 kernel: pci 0000:07:00.0: enabling Extended Tags Nov 8 00:27:47.026032 kernel: pci 0000:07:00.0: supports D1 D2 Nov 8 00:27:47.026083 
kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:27:47.026133 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Nov 8 00:27:47.026182 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.026231 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.026333 kernel: pci_bus 0000:08: extended config space not accessible Nov 8 00:27:47.026390 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000 Nov 8 00:27:47.026448 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 8 00:27:47.026501 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 8 00:27:47.026553 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f] Nov 8 00:27:47.026606 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:27:47.026658 kernel: pci 0000:08:00.0: supports D1 D2 Nov 8 00:27:47.026712 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 8 00:27:47.026764 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Nov 8 00:27:47.026815 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.026869 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.026878 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 8 00:27:47.026884 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 8 00:27:47.026890 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 8 00:27:47.026895 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 8 00:27:47.026901 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 8 00:27:47.026907 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 8 00:27:47.026913 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 8 00:27:47.026919 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 8 00:27:47.026926 kernel: iommu: Default domain type: Translated Nov 8 00:27:47.026932 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Nov 8 00:27:47.026937 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:27:47.026943 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:27:47.026949 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Nov 8 00:27:47.026955 kernel: e820: reserve RAM buffer [mem 0x819c4000-0x83ffffff] Nov 8 00:27:47.026960 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Nov 8 00:27:47.026967 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Nov 8 00:27:47.026973 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 8 00:27:47.026979 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 8 00:27:47.027030 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device Nov 8 00:27:47.027084 kernel: pci 0000:08:00.0: vgaarb: bridge control possible Nov 8 00:27:47.027136 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:27:47.027144 kernel: vgaarb: loaded Nov 8 00:27:47.027150 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Nov 8 00:27:47.027156 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter Nov 8 00:27:47.027162 kernel: clocksource: Switched to clocksource tsc-early Nov 8 00:27:47.027169 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:27:47.027175 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:27:47.027181 kernel: pnp: PnP ACPI init Nov 8 00:27:47.027232 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 8 00:27:47.027327 kernel: pnp 00:02: [dma 0 disabled] Nov 8 00:27:47.027378 kernel: pnp 00:03: [dma 0 disabled] Nov 8 00:27:47.027427 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 8 00:27:47.027475 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 8 00:27:47.027524 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 8 00:27:47.027569 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 8 
00:27:47.027616 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 8 00:27:47.027660 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 8 00:27:47.027706 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 8 00:27:47.027751 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 8 00:27:47.027798 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 8 00:27:47.027843 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 8 00:27:47.027893 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 8 00:27:47.027942 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 8 00:27:47.027988 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 8 00:27:47.028032 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 8 00:27:47.028078 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 8 00:27:47.028126 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 8 00:27:47.028171 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 8 00:27:47.028220 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 8 00:27:47.028229 kernel: pnp: PnP ACPI: found 9 devices Nov 8 00:27:47.028235 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:27:47.028243 kernel: NET: Registered PF_INET protocol family Nov 8 00:27:47.028271 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:27:47.028277 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 8 00:27:47.028304 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:27:47.028310 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:27:47.028316 kernel: TCP bind hash table entries: 65536 (order: 9, 
2097152 bytes, linear) Nov 8 00:27:47.028322 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 8 00:27:47.028328 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:27:47.028334 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:27:47.028339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:27:47.028345 kernel: NET: Registered PF_XDP protocol family Nov 8 00:27:47.028397 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 8 00:27:47.028448 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 8 00:27:47.028497 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 8 00:27:47.028548 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 8 00:27:47.028599 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028651 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028702 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028754 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 8 00:27:47.028806 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Nov 8 00:27:47.028857 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff] Nov 8 00:27:47.028906 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 00:27:47.028958 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03] Nov 8 00:27:47.029008 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04] Nov 8 00:27:47.029060 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 8 00:27:47.029109 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 8 00:27:47.029159 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05] Nov 8 00:27:47.029209 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 8 00:27:47.029282 kernel: pci 
0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 8 00:27:47.029351 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06] Nov 8 00:27:47.029401 kernel: pci 0000:07:00.0: PCI bridge to [bus 08] Nov 8 00:27:47.029456 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.029506 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.029558 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08] Nov 8 00:27:47.029607 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff] Nov 8 00:27:47.029657 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff] Nov 8 00:27:47.029703 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 8 00:27:47.029747 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:27:47.029792 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:27:47.029835 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:27:47.029879 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 8 00:27:47.029925 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 8 00:27:47.029975 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff] Nov 8 00:27:47.030021 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 8 00:27:47.030074 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff] Nov 8 00:27:47.030120 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff] Nov 8 00:27:47.030169 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 8 00:27:47.030217 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff] Nov 8 00:27:47.030311 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 8 00:27:47.030357 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 8 00:27:47.030406 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff] Nov 8 00:27:47.030452 kernel: pci_bus 0000:08: resource 1 [mem 
0x94000000-0x950fffff]
Nov 8 00:27:47.030460 kernel: PCI: CLS 64 bytes, default 64
Nov 8 00:27:47.030466 kernel: DMAR: No ATSR found
Nov 8 00:27:47.030474 kernel: DMAR: No SATC found
Nov 8 00:27:47.030480 kernel: DMAR: dmar0: Using Queued invalidation
Nov 8 00:27:47.030530 kernel: pci 0000:00:00.0: Adding to iommu group 0
Nov 8 00:27:47.030581 kernel: pci 0000:00:01.0: Adding to iommu group 1
Nov 8 00:27:47.030631 kernel: pci 0000:00:01.1: Adding to iommu group 1
Nov 8 00:27:47.030680 kernel: pci 0000:00:08.0: Adding to iommu group 2
Nov 8 00:27:47.030729 kernel: pci 0000:00:12.0: Adding to iommu group 3
Nov 8 00:27:47.030778 kernel: pci 0000:00:14.0: Adding to iommu group 4
Nov 8 00:27:47.030827 kernel: pci 0000:00:14.2: Adding to iommu group 4
Nov 8 00:27:47.030879 kernel: pci 0000:00:15.0: Adding to iommu group 5
Nov 8 00:27:47.030927 kernel: pci 0000:00:15.1: Adding to iommu group 5
Nov 8 00:27:47.030976 kernel: pci 0000:00:16.0: Adding to iommu group 6
Nov 8 00:27:47.031024 kernel: pci 0000:00:16.1: Adding to iommu group 6
Nov 8 00:27:47.031074 kernel: pci 0000:00:16.4: Adding to iommu group 6
Nov 8 00:27:47.031122 kernel: pci 0000:00:17.0: Adding to iommu group 7
Nov 8 00:27:47.031172 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Nov 8 00:27:47.031221 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Nov 8 00:27:47.031321 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Nov 8 00:27:47.031371 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Nov 8 00:27:47.031422 kernel: pci 0000:00:1c.1: Adding to iommu group 12
Nov 8 00:27:47.031471 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Nov 8 00:27:47.031521 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Nov 8 00:27:47.031570 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Nov 8 00:27:47.031618 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Nov 8 00:27:47.031669 kernel: pci 0000:02:00.0: Adding to iommu group 1
Nov 8 00:27:47.031722 kernel: pci 0000:02:00.1: Adding to iommu group 1
Nov 8 00:27:47.031773 kernel: pci 0000:04:00.0: Adding to iommu group 15
Nov 8 00:27:47.031824 kernel: pci 0000:05:00.0: Adding to iommu group 16
Nov 8 00:27:47.031875 kernel: pci 0000:07:00.0: Adding to iommu group 17
Nov 8 00:27:47.031927 kernel: pci 0000:08:00.0: Adding to iommu group 17
Nov 8 00:27:47.031935 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Nov 8 00:27:47.031941 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:27:47.031947 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB)
Nov 8 00:27:47.031955 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Nov 8 00:27:47.031961 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Nov 8 00:27:47.031966 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Nov 8 00:27:47.031972 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Nov 8 00:27:47.032025 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Nov 8 00:27:47.032033 kernel: Initialise system trusted keyrings
Nov 8 00:27:47.032039 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Nov 8 00:27:47.032045 kernel: Key type asymmetric registered
Nov 8 00:27:47.032052 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:27:47.032058 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:27:47.032064 kernel: io scheduler mq-deadline registered
Nov 8 00:27:47.032070 kernel: io scheduler kyber registered
Nov 8 00:27:47.032076 kernel: io scheduler bfq registered
Nov 8 00:27:47.032125 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Nov 8 00:27:47.032174 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122
Nov 8 00:27:47.032224 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123
Nov 8 00:27:47.032318 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124
Nov 8 00:27:47.032370 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125
Nov 8 00:27:47.032419 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126
Nov 8 00:27:47.032469 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127
Nov 8 00:27:47.032525 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Nov 8 00:27:47.032533 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Nov 8 00:27:47.032539 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Nov 8 00:27:47.032545 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:27:47.032553 kernel: pstore: Registered erst as persistent store backend
Nov 8 00:27:47.032559 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:27:47.032565 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:27:47.032570 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:27:47.032576 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 8 00:27:47.032628 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Nov 8 00:27:47.032637 kernel: i8042: PNP: No PS/2 controller found.
Nov 8 00:27:47.032681 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Nov 8 00:27:47.032730 kernel: rtc_cmos rtc_cmos: registered as rtc0
Nov 8 00:27:47.032775 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-08T00:27:45 UTC (1762561665)
Nov 8 00:27:47.032821 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Nov 8 00:27:47.032829 kernel: intel_pstate: Intel P-state driver initializing
Nov 8 00:27:47.032835 kernel: intel_pstate: Disabling energy efficiency optimization
Nov 8 00:27:47.032841 kernel: intel_pstate: HWP enabled
Nov 8 00:27:47.032847 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Nov 8 00:27:47.032853 kernel: vesafb: scrolling: redraw
Nov 8 00:27:47.032859 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Nov 8 00:27:47.032866 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x00000000388354e9, using 768k, total 768k
Nov 8 00:27:47.032872 kernel: Console: switching to colour frame buffer device 128x48
Nov 8 00:27:47.032878 kernel: fb0: VESA VGA frame buffer device
Nov 8 00:27:47.032884 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:27:47.032890 kernel: Segment Routing with IPv6
Nov 8 00:27:47.032895 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:27:47.032901 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:27:47.032907 kernel: Key type dns_resolver registered
Nov 8 00:27:47.032913 kernel: microcode: Current revision: 0x00000102
Nov 8 00:27:47.032919 kernel: microcode: Microcode Update Driver: v2.2.
Nov 8 00:27:47.032925 kernel: IPI shorthand broadcast: enabled
Nov 8 00:27:47.032931 kernel: sched_clock: Marking stable (1661307766, 1363989216)->(4461564231, -1436267249)
Nov 8 00:27:47.032937 kernel: registered taskstats version 1
Nov 8 00:27:47.032942 kernel: Loading compiled-in X.509 certificates
Nov 8 00:27:47.032948 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:27:47.032954 kernel: Key type .fscrypt registered
Nov 8 00:27:47.032960 kernel: Key type fscrypt-provisioning registered
Nov 8 00:27:47.032965 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:27:47.032972 kernel: ima: No architecture policies found
Nov 8 00:27:47.032978 kernel: clk: Disabling unused clocks
Nov 8 00:27:47.032984 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:27:47.032989 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:27:47.032995 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:27:47.033001 kernel: Run /init as init process
Nov 8 00:27:47.033007 kernel: with arguments:
Nov 8 00:27:47.033012 kernel: /init
Nov 8 00:27:47.033018 kernel: with environment:
Nov 8 00:27:47.033025 kernel: HOME=/
Nov 8 00:27:47.033030 kernel: TERM=linux
Nov 8 00:27:47.033037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:27:47.033044 systemd[1]: Detected architecture x86-64.
Nov 8 00:27:47.033051 systemd[1]: Running in initrd.
Nov 8 00:27:47.033057 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:27:47.033063 systemd[1]: Hostname set to .
Nov 8 00:27:47.033070 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:27:47.033076 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:27:47.033082 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:27:47.033088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:27:47.033094 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:27:47.033101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:27:47.033107 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:27:47.033113 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:27:47.033120 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:27:47.033127 kernel: tsc: Refined TSC clocksource calibration: 3407.974 MHz
Nov 8 00:27:47.033133 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fbb0eafc, max_idle_ns: 440795256507 ns
Nov 8 00:27:47.033139 kernel: clocksource: Switched to clocksource tsc
Nov 8 00:27:47.033145 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:27:47.033151 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:27:47.033157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:27:47.033164 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:27:47.033171 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:27:47.033177 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:27:47.033183 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:27:47.033189 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:27:47.033195 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:27:47.033201 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:27:47.033207 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:27:47.033213 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:27:47.033220 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:27:47.033226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:27:47.033232 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:27:47.033238 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:27:47.033247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:27:47.033253 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:27:47.033283 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:27:47.033290 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:27:47.033327 systemd-journald[267]: Collecting audit messages is disabled.
Nov 8 00:27:47.033341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:27:47.033348 systemd-journald[267]: Journal started
Nov 8 00:27:47.033363 systemd-journald[267]: Runtime Journal (/run/log/journal/c473add8d6934c6ea00aa6871e10ab56) is 8.0M, max 639.9M, 631.9M free.
Nov 8 00:27:47.066985 systemd-modules-load[268]: Inserted module 'overlay'
Nov 8 00:27:47.076437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:27:47.097247 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:27:47.097259 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:27:47.169474 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:27:47.169487 kernel: Bridge firewalling registered
Nov 8 00:27:47.154431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:27:47.159329 systemd-modules-load[268]: Inserted module 'br_netfilter'
Nov 8 00:27:47.180511 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:27:47.191645 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:27:47.220609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:27:47.245696 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:27:47.252303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:27:47.281743 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:27:47.289064 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:27:47.293192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:27:47.294014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:27:47.294676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:27:47.299260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:27:47.301101 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:27:47.303670 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:27:47.309701 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:27:47.322841 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:27:47.346990 systemd-resolved[299]: Positive Trust Anchors:
Nov 8 00:27:47.346999 systemd-resolved[299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:27:47.347040 systemd-resolved[299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:27:47.349717 systemd-resolved[299]: Defaulting to hostname 'linux'.
Nov 8 00:27:47.350524 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:27:47.356538 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:27:47.468361 dracut-cmdline[306]: dracut-dracut-053
Nov 8 00:27:47.468361 dracut-cmdline[306]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:27:47.549273 kernel: SCSI subsystem initialized
Nov 8 00:27:47.572284 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:27:47.594273 kernel: iscsi: registered transport (tcp)
Nov 8 00:27:47.627292 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:27:47.627309 kernel: QLogic iSCSI HBA Driver
Nov 8 00:27:47.659736 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:27:47.679510 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:27:47.762316 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:27:47.762336 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:27:47.781949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:27:47.841275 kernel: raid6: avx2x4 gen() 53273 MB/s
Nov 8 00:27:47.873321 kernel: raid6: avx2x2 gen() 53564 MB/s
Nov 8 00:27:47.909608 kernel: raid6: avx2x1 gen() 45259 MB/s
Nov 8 00:27:47.909625 kernel: raid6: using algorithm avx2x2 gen() 53564 MB/s
Nov 8 00:27:47.956677 kernel: raid6: .... xor() 31034 MB/s, rmw enabled
Nov 8 00:27:47.956694 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:27:47.997274 kernel: xor: automatically using best checksumming function avx
Nov 8 00:27:48.115288 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:27:48.121119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:27:48.142406 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:27:48.149762 systemd-udevd[493]: Using default interface naming scheme 'v255'.
Nov 8 00:27:48.153377 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:27:48.187435 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:27:48.214363 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Nov 8 00:27:48.229083 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:27:48.254491 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:27:48.346624 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:27:48.391545 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 8 00:27:48.391583 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 8 00:27:48.361620 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:27:48.416294 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:27:48.416311 kernel: PTP clock support registered
Nov 8 00:27:48.396050 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:27:48.451053 kernel: libata version 3.00 loaded.
Nov 8 00:27:48.451073 kernel: ACPI: bus type USB registered
Nov 8 00:27:48.451081 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:27:48.396084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:27:48.528308 kernel: usbcore: registered new interface driver hub
Nov 8 00:27:48.528324 kernel: usbcore: registered new device driver usb
Nov 8 00:27:48.528336 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:27:48.528344 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:27:48.528352 kernel: ahci 0000:00:17.0: version 3.0
Nov 8 00:27:48.490333 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:27:48.572352 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode
Nov 8 00:27:48.572438 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Nov 8 00:27:48.547282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:27:48.601801 kernel: scsi host0: ahci
Nov 8 00:27:48.603466 kernel: scsi host1: ahci
Nov 8 00:27:48.603565 kernel: scsi host2: ahci
Nov 8 00:27:48.603585 kernel: igb: Intel(R) Gigabit Ethernet Network Driver
Nov 8 00:27:48.547328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:27:49.525018 kernel: scsi host3: ahci
Nov 8 00:27:49.525175 kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
Nov 8 00:27:49.525184 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014
Nov 8 00:27:49.525269 kernel: scsi host4: ahci
Nov 8 00:27:49.525337 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 8 00:27:49.525405 kernel: scsi host5: ahci
Nov 8 00:27:49.525470 kernel: igb 0000:04:00.0: added PHC on eth0
Nov 8 00:27:49.525539 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 8 00:27:49.525604 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d4
Nov 8 00:27:49.525666 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000
Nov 8 00:27:49.525730 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 8 00:27:49.525792 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 8 00:27:49.525856 kernel: igb 0000:05:00.0: added PHC on eth1
Nov 8 00:27:49.525920 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
Nov 8 00:27:49.525986 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d5
Nov 8 00:27:49.526053 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000
Nov 8 00:27:49.526119 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
Nov 8 00:27:49.526181 kernel: scsi host6: ahci
Nov 8 00:27:49.526249 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1
Nov 8 00:27:49.526362 kernel: scsi host7: ahci
Nov 8 00:27:49.526426 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810
Nov 8 00:27:49.526489 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128
Nov 8 00:27:49.526500 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller
Nov 8 00:27:49.526562 kernel: igb 0000:04:00.0 eno1: renamed from eth0
Nov 8 00:27:49.526625 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128
Nov 8 00:27:49.526634 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2
Nov 8 00:27:49.526695 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128
Nov 8 00:27:49.526703 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed
Nov 8 00:27:49.526763 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128
Nov 8 00:27:49.526772 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:27:49.526843 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128
Nov 8 00:27:49.526852 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128
Nov 8 00:27:49.526859 kernel: hub 1-0:1.0: 16 ports detected
Nov 8 00:27:49.526921 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128
Nov 8 00:27:49.526930 kernel: hub 2-0:1.0: USB hub found
Nov 8 00:27:49.526998 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128
Nov 8 00:27:49.527007 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Nov 8 00:27:49.527072 kernel: hub 2-0:1.0: 10 ports detected
Nov 8 00:27:49.527133 kernel: igb 0000:05:00.0 eno2: renamed from eth1
Nov 8 00:27:49.527207 kernel: mlx5_core 0000:02:00.0: Port module event: module 0, Cable plugged
Nov 8 00:27:49.527389 kernel: ata8: SATA link down (SStatus 0 SControl 300)
Nov 8 00:27:49.527405 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd
Nov 8 00:27:49.527583 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 8 00:27:49.527599 kernel: hub 1-14:1.0: USB hub found
Nov 8 00:27:49.527799 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:27:49.527817 kernel: hub 1-14:1.0: 4 ports detected
Nov 8 00:27:49.527981 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 8 00:27:49.527997 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 8 00:27:49.528165 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Nov 8 00:27:49.528178 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014
Nov 8 00:27:49.528367 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:27:49.528384 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link)
Nov 8 00:27:49.528560 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133
Nov 8 00:27:49.528578 kernel: ata7: SATA link down (SStatus 0 SControl 300)
Nov 8 00:27:49.528596 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:27:49.528609 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:27:49.528622 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 8 00:27:48.600297 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:27:49.565165 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA
Nov 8 00:27:49.497469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:27:49.628353 kernel: ata1.00: Features: NCQ-prio
Nov 8 00:27:49.628366 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd
Nov 8 00:27:49.628384 kernel: ata2.00: Features: NCQ-prio
Nov 8 00:27:49.559826 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:27:49.608442 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:27:49.628429 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:27:49.628501 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:27:49.647305 kernel: ata1.00: configured for UDMA/133
Nov 8 00:27:49.647322 kernel: ata2.00: configured for UDMA/133
Nov 8 00:27:49.647330 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 8 00:27:49.648465 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:27:49.740793 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048)
Nov 8 00:27:49.740933 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5
Nov 8 00:27:49.741056 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged
Nov 8 00:27:49.741281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:27:50.247973 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 00:27:50.247994 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 00:27:50.248003 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 8 00:27:50.248101 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB)
Nov 8 00:27:50.248214 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 8 00:27:50.248291 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:27:50.248359 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks
Nov 8 00:27:50.248424 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
Nov 8 00:27:50.248487 kernel: sd 1:0:0:0: [sdb] Write Protect is off
Nov 8 00:27:50.248551 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:27:50.248614 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
Nov 8 00:27:50.248679 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
Nov 8 00:27:50.248742 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:27:50.248805 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 00:27:50.248814 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes
Nov 8 00:27:50.248875 kernel: ata2.00: Enabling discard_zeroes_data
Nov 8 00:27:50.248884 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 8 00:27:50.248956 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
Nov 8 00:27:50.249021 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:27:50.249030 kernel: GPT:9289727 != 937703087
Nov 8 00:27:50.249038 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:27:50.249045 kernel: GPT:9289727 != 937703087
Nov 8 00:27:50.249052 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:27:50.249060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:50.249067 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:27:50.249129 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:27:50.249138 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth1
Nov 8 00:27:50.249205 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (694)
Nov 8 00:27:50.249214 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by (udev-worker) (693)
Nov 8 00:27:50.249222 kernel: usbcore: registered new interface driver usbhid
Nov 8 00:27:50.249229 kernel: usbhid: USB HID core driver
Nov 8 00:27:50.249236 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth0
Nov 8 00:27:50.249316 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0
Nov 8 00:27:50.181460 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:27:50.297782 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT.
Nov 8 00:27:50.317669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM.
Nov 8 00:27:50.359442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A.
Nov 8 00:27:50.438322 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0
Nov 8 00:27:50.438421 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1
Nov 8 00:27:50.438431 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1
Nov 8 00:27:50.401951 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A.
Nov 8 00:27:50.453321 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Nov 8 00:27:50.495486 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:27:50.525185 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 00:27:50.525200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:50.496116 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:27:50.557472 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 00:27:50.557484 disk-uuid[716]: Primary Header is updated.
Nov 8 00:27:50.557484 disk-uuid[716]: Secondary Entries is updated.
Nov 8 00:27:50.557484 disk-uuid[716]: Secondary Header is updated.
Nov 8 00:27:50.610426 kernel: GPT:disk_guids don't match.
Nov 8 00:27:50.610437 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:27:50.610444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:50.627424 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:27:50.661309 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 00:27:50.661336 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:51.628329 kernel: ata1.00: Enabling discard_zeroes_data
Nov 8 00:27:51.648184 disk-uuid[717]: The operation has completed successfully.
Nov 8 00:27:51.656499 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:27:51.682214 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:27:51.682335 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:27:51.720508 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:27:51.759297 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:27:51.759363 sh[743]: Success
Nov 8 00:27:51.800383 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:27:51.817189 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:27:51.834625 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:27:51.881500 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:27:51.881522 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:27:51.903041 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:27:51.922244 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:27:51.940434 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:27:51.979250 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:27:51.981164 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:27:51.989649 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:27:51.995373 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:27:52.022690 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:27:52.066691 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:27:52.066719 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:27:52.076952 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:27:52.158137 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:27:52.158154 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:27:52.158162 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:27:52.158171 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:27:52.174550 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:27:52.174689 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:27:52.185342 systemd-networkd[923]: lo: Link UP Nov 8 00:27:52.185344 systemd-networkd[923]: lo: Gained carrier Nov 8 00:27:52.187853 systemd-networkd[923]: Enumeration completed Nov 8 00:27:52.188677 systemd-networkd[923]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:52.201873 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:27:52.219982 systemd-networkd[923]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:52.221893 systemd[1]: Reached target network.target - Network. Nov 8 00:27:52.250026 systemd-networkd[923]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:27:52.250609 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:27:52.319330 unknown[928]: fetched base config from "system" Nov 8 00:27:52.317251 ignition[928]: Ignition 2.19.0 Nov 8 00:27:52.319334 unknown[928]: fetched user config from "system" Nov 8 00:27:52.317256 ignition[928]: Stage: fetch-offline Nov 8 00:27:52.320278 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:27:52.317281 ignition[928]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:27:52.335751 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Nov 8 00:27:52.317286 ignition[928]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 00:27:52.342474 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:27:52.317343 ignition[928]: parsed url from cmdline: ""
Nov 8 00:27:52.317345 ignition[928]: no config URL provided
Nov 8 00:27:52.451476 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Nov 8 00:27:52.446197 systemd-networkd[923]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:27:52.317348 ignition[928]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:27:52.317374 ignition[928]: parsing config with SHA512: f2dca2712e6c7c40822eb4a30add3e222cd1e974a3e0f9ab3f62e86180ae505b1d01378a0aeae5d434b7d4a7017cfd8e0c57fbdf53096afda415b8439fdebc4b
Nov 8 00:27:52.319546 ignition[928]: fetch-offline: fetch-offline passed
Nov 8 00:27:52.319548 ignition[928]: POST message to Packet Timeline
Nov 8 00:27:52.319551 ignition[928]: POST Status error: resource requires networking
Nov 8 00:27:52.319589 ignition[928]: Ignition finished successfully
Nov 8 00:27:52.351350 ignition[939]: Ignition 2.19.0
Nov 8 00:27:52.351355 ignition[939]: Stage: kargs
Nov 8 00:27:52.351487 ignition[939]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:27:52.351496 ignition[939]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 00:27:52.352192 ignition[939]: kargs: kargs passed
Nov 8 00:27:52.352195 ignition[939]: POST message to Packet Timeline
Nov 8 00:27:52.352207 ignition[939]: GET https://metadata.packet.net/metadata: attempt #1
Nov 8 00:27:52.352751 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50675->[::1]:53: read: connection refused
Nov 8 00:27:52.553861 ignition[939]: GET https://metadata.packet.net/metadata: attempt #2
Nov 8 00:27:52.554869 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58860->[::1]:53: read: connection refused
Nov 8 00:27:52.702281 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Nov 8 00:27:52.703756 systemd-networkd[923]: eno1: Link UP
Nov 8 00:27:52.703985 systemd-networkd[923]: eno2: Link UP
Nov 8 00:27:52.704196 systemd-networkd[923]: enp2s0f0np0: Link UP
Nov 8 00:27:52.704455 systemd-networkd[923]: enp2s0f0np0: Gained carrier
Nov 8 00:27:52.714828 systemd-networkd[923]: enp2s0f1np1: Link UP
Nov 8 00:27:52.741436 systemd-networkd[923]: enp2s0f0np0: DHCPv4 address 139.178.94.39/31, gateway 139.178.94.38 acquired from 145.40.83.140
Nov 8 00:27:52.955333 ignition[939]: GET https://metadata.packet.net/metadata: attempt #3
Nov 8 00:27:52.956614 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38560->[::1]:53: read: connection refused
Nov 8 00:27:53.485984 systemd-networkd[923]: enp2s0f1np1: Gained carrier
Nov 8 00:27:53.741734 systemd-networkd[923]: enp2s0f0np0: Gained IPv6LL
Nov 8 00:27:53.757517 ignition[939]: GET https://metadata.packet.net/metadata: attempt #4
Nov 8 00:27:53.758603 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:52178->[::1]:53: read: connection refused
Nov 8 00:27:55.277749 systemd-networkd[923]: enp2s0f1np1: Gained IPv6LL
Nov 8 00:27:55.360348 ignition[939]: GET https://metadata.packet.net/metadata: attempt #5
Nov 8 00:27:55.361529 ignition[939]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:41319->[::1]:53: read: connection refused
Nov 8 00:27:58.563950 ignition[939]: GET https://metadata.packet.net/metadata: attempt #6
Nov 8 00:27:59.493163 ignition[939]: GET result: OK
Nov 8 00:28:07.579184 ignition[939]: Ignition finished successfully
Nov 8 00:28:07.584608 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:28:07.611552 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:28:07.617804 ignition[956]: Ignition 2.19.0
Nov 8 00:28:07.617808 ignition[956]: Stage: disks
Nov 8 00:28:07.617918 ignition[956]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:07.617925 ignition[956]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 00:28:07.618462 ignition[956]: disks: disks passed
Nov 8 00:28:07.618464 ignition[956]: POST message to Packet Timeline
Nov 8 00:28:07.618473 ignition[956]: GET https://metadata.packet.net/metadata: attempt #1
Nov 8 00:28:09.654587 ignition[956]: GET result: OK
Nov 8 00:28:10.488971 ignition[956]: Ignition finished successfully
Nov 8 00:28:10.492456 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:28:10.507565 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:28:10.525515 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:28:10.547523 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:28:10.569681 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:28:10.589685 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:28:10.624490 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:28:10.660647 systemd-fsck[972]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:28:10.671824 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:28:10.694476 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:28:10.799819 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:28:10.815475 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:28:10.808730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:28:10.841435 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:28:10.850343 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:28:10.974804 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (981)
Nov 8 00:28:10.974817 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:10.974825 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:28:10.974832 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:28:10.974843 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:28:10.974850 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:28:10.875174 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:28:10.991905 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Nov 8 00:28:11.014443 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:28:11.014461 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:28:11.078437 coreos-metadata[983]: Nov 08 00:28:11.033 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 00:28:11.024438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:28:11.109345 coreos-metadata[999]: Nov 08 00:28:11.035 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 00:28:11.041541 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:28:11.081489 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:28:11.139385 initrd-setup-root[1013]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:28:11.149362 initrd-setup-root[1020]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:28:11.159514 initrd-setup-root[1027]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:28:11.169277 initrd-setup-root[1034]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:28:11.181297 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:28:11.204449 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:28:11.244467 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:11.234878 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:28:11.245060 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:28:11.280430 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:28:11.299436 ignition[1101]: INFO : Ignition 2.19.0
Nov 8 00:28:11.299436 ignition[1101]: INFO : Stage: mount
Nov 8 00:28:11.299436 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:11.299436 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 00:28:11.299436 ignition[1101]: INFO : mount: mount passed
Nov 8 00:28:11.299436 ignition[1101]: INFO : POST message to Packet Timeline
Nov 8 00:28:11.299436 ignition[1101]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 00:28:12.030916 coreos-metadata[999]: Nov 08 00:28:12.030 INFO Fetch successful
Nov 8 00:28:12.068373 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Nov 8 00:28:12.068448 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Nov 8 00:28:12.305760 ignition[1101]: INFO : GET result: OK
Nov 8 00:28:12.698335 coreos-metadata[983]: Nov 08 00:28:12.698 INFO Fetch successful
Nov 8 00:28:12.731082 coreos-metadata[983]: Nov 08 00:28:12.731 INFO wrote hostname ci-4081.3.6-n-8b27c00582 to /sysroot/etc/hostname
Nov 8 00:28:12.732406 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:28:12.971764 ignition[1101]: INFO : Ignition finished successfully
Nov 8 00:28:12.974727 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:28:13.009425 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:28:13.013144 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:28:13.078283 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1124)
Nov 8 00:28:13.108166 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:28:13.108182 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:28:13.126233 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:28:13.165306 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:28:13.165342 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:28:13.179437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:28:13.211014 ignition[1141]: INFO : Ignition 2.19.0
Nov 8 00:28:13.211014 ignition[1141]: INFO : Stage: files
Nov 8 00:28:13.226503 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:13.226503 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 00:28:13.226503 ignition[1141]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:28:13.226503 ignition[1141]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:28:13.226503 ignition[1141]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:28:13.226503 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:28:13.226503 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 8 00:28:13.215574 unknown[1141]: wrote ssh authorized keys file for user: core
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:28:13.362450 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:28:13.611592 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 8 00:28:13.790284 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:28:14.377616 ignition[1141]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:28:14.377616 ignition[1141]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:28:14.407500 ignition[1141]: INFO : files: files passed
Nov 8 00:28:14.407500 ignition[1141]: INFO : POST message to Packet Timeline
Nov 8 00:28:14.407500 ignition[1141]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 00:28:15.438586 ignition[1141]: INFO : GET result: OK
Nov 8 00:28:15.884195 ignition[1141]: INFO : Ignition finished successfully
Nov 8 00:28:15.885924 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:28:15.920483 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:28:15.930908 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:28:15.951721 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:28:15.951809 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:28:16.002546 initrd-setup-root-after-ignition[1180]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:28:16.002546 initrd-setup-root-after-ignition[1180]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:28:15.973940 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:28:16.051590 initrd-setup-root-after-ignition[1184]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:28:15.994880 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:28:16.028635 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:28:16.119931 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:28:16.119980 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:28:16.138679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:28:16.149557 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:28:16.166590 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:28:16.177553 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:28:16.259535 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:28:16.285667 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:28:16.315147 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:28:16.315691 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:28:16.346035 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:28:16.365942 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:28:16.366374 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:28:16.403679 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:28:16.413976 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:28:16.432955 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:28:16.450962 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:28:16.472954 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:28:16.494982 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:28:16.515974 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:28:16.537994 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:28:16.558976 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:28:16.578949 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:28:16.597848 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:28:16.598274 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:28:16.623211 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:28:16.642996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:28:16.663830 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:28:16.664314 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:28:16.685852 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:28:16.686286 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:28:16.716946 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:28:16.717429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:28:16.737170 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:28:16.755812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:28:16.760475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:28:16.777971 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:28:16.797954 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:28:16.815901 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:28:16.816209 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:28:16.826141 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:28:16.826476 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:28:16.857055 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:28:16.966341 ignition[1205]: INFO : Ignition 2.19.0
Nov 8 00:28:16.966341 ignition[1205]: INFO : Stage: umount
Nov 8 00:28:16.966341 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:28:16.966341 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 8 00:28:16.966341 ignition[1205]: INFO : umount: umount passed
Nov 8 00:28:16.966341 ignition[1205]: INFO : POST message to Packet Timeline
Nov 8 00:28:16.966341 ignition[1205]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 8 00:28:16.857481 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:28:16.877055 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:28:16.877464 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:28:16.895023 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 00:28:16.895436 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:28:16.929405 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:28:16.932834 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:28:16.948438 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:28:16.948603 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:28:16.977548 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:28:16.977615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:28:17.023506 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:28:17.024343 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:28:17.024459 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:28:17.030432 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:28:17.030560 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:28:19.589911 ignition[1205]: INFO : GET result: OK
Nov 8 00:28:21.134211 ignition[1205]: INFO : Ignition finished successfully
Nov 8 00:28:21.137130 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:28:21.137447 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:28:21.155523 systemd[1]: Stopped target network.target - Network.
Nov 8 00:28:21.170505 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:28:21.170691 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:28:21.188585 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:28:21.188723 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:28:21.206642 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:28:21.206801 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:28:21.225786 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:28:21.225958 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:28:21.244780 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:28:21.244953 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:28:21.264176 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:28:21.274413 systemd-networkd[923]: enp2s0f1np1: DHCPv6 lease lost
Nov 8 00:28:21.283705 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:28:21.290460 systemd-networkd[923]: enp2s0f0np0: DHCPv6 lease lost
Nov 8 00:28:21.302400 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:28:21.302692 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:28:21.321568 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:28:21.321942 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:28:21.341790 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:28:21.341904 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:28:21.373494 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:28:21.400394 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:28:21.400439 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:28:21.419510 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:28:21.419599 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:28:21.439664 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:28:21.439830 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:28:21.457656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:28:21.457824 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:28:21.478010 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:28:21.500743 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:28:21.501131 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:28:21.530802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:28:21.530844 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:28:21.557361 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:28:21.557390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:28:21.577460 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:28:21.577542 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:28:21.616436 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:28:21.616577 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:28:21.646646 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:28:21.646781 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:28:21.932477 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:28:21.687606 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:28:21.698446 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:28:21.698592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:28:21.719528 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 00:28:21.719661 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:28:21.738550 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:28:21.738676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:28:21.760516 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:28:21.760644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:28:21.782540 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:28:21.782759 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:28:21.803031 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:28:21.803277 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:28:21.825290 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:28:21.857702 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:28:21.887146 systemd[1]: Switching root.
Nov 8 00:28:22.037529 systemd-journald[267]: Journal stopped
Nov 8 00:28:24.670026 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:28:24.670041 kernel: SELinux: policy capability open_perms=1
Nov 8 00:28:24.670048 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:28:24.670054 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:28:24.670060 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:28:24.670065 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:28:24.670071 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:28:24.670077 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:28:24.670083 kernel: audit: type=1403 audit(1762561702.248:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:28:24.670090 systemd[1]: Successfully loaded SELinux policy in 156.964ms.
Nov 8 00:28:24.670098 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.047ms.
Nov 8 00:28:24.670105 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:28:24.670111 systemd[1]: Detected architecture x86-64.
Nov 8 00:28:24.670117 systemd[1]: Detected first boot.
Nov 8 00:28:24.670124 systemd[1]: Hostname set to .
Nov 8 00:28:24.670134 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:28:24.670141 zram_generator::config[1256]: No configuration found.
Nov 8 00:28:24.670148 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:28:24.670154 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:28:24.670160 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:28:24.670167 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:28:24.670173 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:28:24.670181 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:28:24.670188 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:28:24.670194 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:28:24.670201 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:28:24.670208 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:28:24.670215 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:28:24.670221 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:28:24.670229 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:28:24.670236 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:28:24.670245 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:28:24.670252 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:28:24.670259 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:28:24.670266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:28:24.670273 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Nov 8 00:28:24.670279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:28:24.670287 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:28:24.670294 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:28:24.670301 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:28:24.670309 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:28:24.670317 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:28:24.670324 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:28:24.670330 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:28:24.670338 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:28:24.670345 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:28:24.670352 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:28:24.670358 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:28:24.670365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:28:24.670372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:28:24.670380 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:28:24.670387 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:28:24.670394 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:28:24.670401 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:28:24.670408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:24.670415 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:28:24.670422 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:28:24.670431 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:28:24.670438 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:28:24.670445 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:28:24.670452 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:28:24.670459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:28:24.670466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:28:24.670473 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:28:24.670480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:28:24.670487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:28:24.670495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:28:24.670502 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:28:24.670509 kernel: ACPI: bus type drm_connector registered
Nov 8 00:28:24.670515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:28:24.670522 kernel: fuse: init (API version 7.39)
Nov 8 00:28:24.670529 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:28:24.670536 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:28:24.670542 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:28:24.670550 kernel: loop: module loaded
Nov 8 00:28:24.670557 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:28:24.670564 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:28:24.670571 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:28:24.670586 systemd-journald[1359]: Collecting audit messages is disabled.
Nov 8 00:28:24.670602 systemd-journald[1359]: Journal started
Nov 8 00:28:24.670617 systemd-journald[1359]: Runtime Journal (/run/log/journal/b8dd3f14423743dc86a855c067863bb5) is 8.0M, max 639.9M, 631.9M free.
Nov 8 00:28:22.800879 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:28:22.828879 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 8 00:28:22.829743 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:28:24.698343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:28:24.731288 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:28:24.765314 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:28:24.797293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:28:24.831610 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:28:24.831638 systemd[1]: Stopped verity-setup.service.
Nov 8 00:28:24.894292 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:24.915441 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:28:24.924825 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:28:24.935506 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:28:24.945502 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:28:24.955495 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:28:24.965473 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:28:24.976471 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:28:24.986602 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:28:24.998661 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:28:25.009948 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:28:25.010228 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:28:25.023112 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:28:25.023479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:28:25.035216 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:28:25.035584 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:28:25.047110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:28:25.047473 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:28:25.059128 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:28:25.059495 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:28:25.071432 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:28:25.071829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:28:25.082190 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:28:25.093208 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:28:25.105145 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:28:25.117092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:28:25.150457 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:28:25.176556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:28:25.188083 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:28:25.198428 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:28:25.198452 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:28:25.210288 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:28:25.234652 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:28:25.247143 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:28:25.257519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:28:25.258884 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:28:25.268883 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:28:25.279386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:28:25.280009 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:28:25.285292 systemd-journald[1359]: Time spent on flushing to /var/log/journal/b8dd3f14423743dc86a855c067863bb5 is 13.137ms for 1380 entries.
Nov 8 00:28:25.285292 systemd-journald[1359]: System Journal (/var/log/journal/b8dd3f14423743dc86a855c067863bb5) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:28:25.321835 systemd-journald[1359]: Received client request to flush runtime journal.
Nov 8 00:28:25.297381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:28:25.317397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:28:25.328064 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:28:25.340027 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:28:25.356996 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:28:25.365245 kernel: loop0: detected capacity change from 0 to 140768
Nov 8 00:28:25.376465 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:28:25.387411 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:28:25.389043 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Nov 8 00:28:25.389053 systemd-tmpfiles[1394]: ACLs are not supported, ignoring.
Nov 8 00:28:25.404540 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:28:25.412247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:28:25.423484 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:28:25.434502 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:28:25.446464 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:28:25.462425 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:28:25.468248 kernel: loop1: detected capacity change from 0 to 8
Nov 8 00:28:25.481212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:28:25.506506 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:28:25.524083 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:28:25.528287 kernel: loop2: detected capacity change from 0 to 142488
Nov 8 00:28:25.538844 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:28:25.539332 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:28:25.550824 udevadm[1395]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:28:25.561196 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:28:25.580467 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:28:25.588382 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Nov 8 00:28:25.588393 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Nov 8 00:28:25.591579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:28:25.622432 kernel: loop3: detected capacity change from 0 to 219144
Nov 8 00:28:25.653040 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:28:25.661614 ldconfig[1385]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:28:25.664584 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:28:25.701304 kernel: loop4: detected capacity change from 0 to 140768
Nov 8 00:28:25.702419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:28:25.714515 systemd-udevd[1420]: Using default interface naming scheme 'v255'.
Nov 8 00:28:25.730184 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:28:25.733253 kernel: loop5: detected capacity change from 0 to 8
Nov 8 00:28:25.752254 kernel: loop6: detected capacity change from 0 to 142488
Nov 8 00:28:25.754900 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped.
Nov 8 00:28:25.770586 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2
Nov 8 00:28:25.770638 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1430)
Nov 8 00:28:25.770653 kernel: ACPI: button: Sleep Button [SLPB]
Nov 8 00:28:25.806420 kernel: loop7: detected capacity change from 0 to 219144
Nov 8 00:28:25.806488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 8 00:28:25.819308 (sd-merge)[1421]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'.
Nov 8 00:28:25.861485 kernel: IPMI message handler: version 39.2
Nov 8 00:28:25.861503 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:28:25.861514 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:28:25.819553 (sd-merge)[1421]: Merged extensions into '/usr'.
Nov 8 00:28:25.861425 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:28:25.914253 kernel: ipmi device interface
Nov 8 00:28:25.914320 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set
Nov 8 00:28:25.943754 systemd[1]: Reloading requested from client PID 1392 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:28:25.943764 systemd[1]: Reloading...
Nov 8 00:28:25.948471 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt
Nov 8 00:28:25.978334 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI)
Nov 8 00:28:25.978567 zram_generator::config[1535]: No configuration found.
Nov 8 00:28:26.022253 kernel: iTCO_vendor_support: vendor-support=0
Nov 8 00:28:26.023246 kernel: ipmi_si: IPMI System Interface driver
Nov 8 00:28:26.023279 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface
Nov 8 00:28:26.023387 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface
Nov 8 00:28:26.034134 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS
Nov 8 00:28:26.096168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:28:26.098158 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0
Nov 8 00:28:26.114869 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine
Nov 8 00:28:26.131612 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI
Nov 8 00:28:26.150160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM.
Nov 8 00:28:26.170739 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0
Nov 8 00:28:26.170839 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI
Nov 8 00:28:26.180653 systemd[1]: Reloading finished in 236 ms.
Nov 8 00:28:26.186796 kernel: ipmi_si: Adding ACPI-specified kcs state machine
Nov 8 00:28:26.206967 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0
Nov 8 00:28:26.243251 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
Nov 8 00:28:26.285297 kernel: intel_rapl_common: Found RAPL domain package
Nov 8 00:28:26.285342 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed.
Nov 8 00:28:26.285439 kernel: intel_rapl_common: Found RAPL domain core
Nov 8 00:28:26.312248 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20)
Nov 8 00:28:26.312351 kernel: intel_rapl_common: Found RAPL domain dram
Nov 8 00:28:26.381395 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:28:26.421250 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized
Nov 8 00:28:26.424406 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:28:26.438246 kernel: ipmi_ssif: IPMI SSIF Interface driver
Nov 8 00:28:26.445842 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:28:26.457811 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:28:26.470031 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:28:26.470580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:28:26.470835 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:28:26.473138 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:28:26.477997 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:28:26.478205 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:28:26.478710 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:28:26.478880 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Nov 8 00:28:26.478914 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Nov 8 00:28:26.480602 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:28:26.480606 systemd-tmpfiles[1600]: Skipping /boot
Nov 8 00:28:26.484725 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:28:26.484729 systemd-tmpfiles[1600]: Skipping /boot
Nov 8 00:28:26.497575 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:28:26.517166 lvm[1606]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:28:26.520711 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:28:26.520973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:28:26.521884 systemd[1]: Reloading requested from client PID 1597 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:28:26.521891 systemd[1]: Reloading...
Nov 8 00:28:26.529000 systemd-networkd[1505]: lo: Link UP
Nov 8 00:28:26.529004 systemd-networkd[1505]: lo: Gained carrier
Nov 8 00:28:26.531622 systemd-networkd[1505]: bond0: netdev ready
Nov 8 00:28:26.532579 systemd-networkd[1505]: Enumeration completed
Nov 8 00:28:26.540880 systemd-networkd[1505]: enp2s0f0np0: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:3a.network.
Nov 8 00:28:26.562323 zram_generator::config[1647]: No configuration found.
Nov 8 00:28:26.616346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:28:26.671281 systemd[1]: Reloading finished in 149 ms.
Nov 8 00:28:26.685746 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:28:26.703497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:28:26.714511 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:28:26.728624 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:28:26.753482 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:28:26.764307 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:28:26.772199 augenrules[1726]: No rules
Nov 8 00:28:26.776104 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:28:26.788071 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:28:26.790000 lvm[1731]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:28:26.800205 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:28:26.812444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:28:26.823018 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:28:26.834933 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:28:26.844592 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:28:26.855544 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:28:26.866579 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:28:26.889604 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:26.889757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:28:26.890490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:28:26.900962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:28:26.912939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:28:26.918058 systemd-resolved[1734]: Positive Trust Anchors:
Nov 8 00:28:26.918065 systemd-resolved[1734]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:28:26.918089 systemd-resolved[1734]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:28:26.920634 systemd-resolved[1734]: Using system hostname 'ci-4081.3.6-n-8b27c00582'.
Nov 8 00:28:26.922375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:28:26.923167 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:28:26.932337 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:28:26.932398 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:26.933065 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:28:26.944587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:28:26.944660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:28:26.955548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:28:26.955617 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:28:26.966534 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:28:26.966603 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:28:26.976557 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:28:26.988985 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:26.989109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:28:27.000439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:28:27.010815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:28:27.030433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:28:27.041309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:28:27.041382 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:28:27.041432 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:27.042031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:28:27.042110 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:28:27.053500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:28:27.053575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:28:27.064477 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:28:27.064543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:28:27.076472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:27.076606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:28:27.084465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:28:27.094802 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:28:27.104777 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:28:27.116822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:28:27.127373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:28:27.127449 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:28:27.127501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:28:27.128124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:28:27.128200 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:28:27.139559 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:28:27.139627 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:28:27.149513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:28:27.149580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:28:27.160472 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:28:27.160539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:28:27.171154 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:28:27.180705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:28:27.180736 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:28:27.195356 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:28:27.229864 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:28:27.240313 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:28:27.481325 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up
Nov 8 00:28:27.503772 systemd-networkd[1505]: enp2s0f1np1: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:3b.network.
Nov 8 00:28:27.504287 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link
Nov 8 00:28:27.751292 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up
Nov 8 00:28:27.773767 systemd-networkd[1505]: bond0: Configuring with /etc/systemd/network/05-bond0.network.
Nov 8 00:28:27.774254 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link
Nov 8 00:28:27.775292 systemd-networkd[1505]: enp2s0f0np0: Link UP
Nov 8 00:28:27.775645 systemd-networkd[1505]: enp2s0f0np0: Gained carrier
Nov 8 00:28:27.775699 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:28:27.795295 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Nov 8 00:28:27.802313 systemd-networkd[1505]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:ce:f6:07:a6:3a.network.
Nov 8 00:28:27.802570 systemd-networkd[1505]: enp2s0f1np1: Link UP
Nov 8 00:28:27.802858 systemd-networkd[1505]: enp2s0f1np1: Gained carrier
Nov 8 00:28:27.804466 systemd[1]: Reached target network.target - Network.
Nov 8 00:28:27.812408 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:28:27.823431 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:28:27.824631 systemd-networkd[1505]: bond0: Link UP
Nov 8 00:28:27.825092 systemd-networkd[1505]: bond0: Gained carrier
Nov 8 00:28:27.825427 systemd-timesyncd[1768]: Network configuration changed, trying to establish connection.
Nov 8 00:28:27.833613 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:28:27.844555 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:28:27.855877 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:28:27.865691 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:28:27.876477 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:28:27.895448 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:28:27.895535 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:28:27.910272 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex
Nov 8 00:28:27.910407 kernel: bond0: active interface up!
Nov 8 00:28:27.938405 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:28:27.947797 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:28:27.958745 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:28:27.970229 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:28:27.981049 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:28:27.990476 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:28:28.000365 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:28:28.008403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:28:28.008436 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:28:28.035311 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex
Nov 8 00:28:28.037387 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:28:28.048065 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 00:28:28.057891 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:28:28.066939 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:28:28.070724 coreos-metadata[1773]: Nov 08 00:28:28.070 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 8 00:28:28.076992 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:28:28.078602 jq[1777]: false
Nov 8 00:28:28.079129 dbus-daemon[1774]: [system] SELinux support is enabled
Nov 8 00:28:28.086434 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:28:28.087112 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:28:28.095649 extend-filesystems[1779]: Found loop4 Nov 8 00:28:28.095649 extend-filesystems[1779]: Found loop5 Nov 8 00:28:28.152482 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Nov 8 00:28:28.152501 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1436) Nov 8 00:28:28.152511 extend-filesystems[1779]: Found loop6 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found loop7 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda1 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda2 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda3 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found usr Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda4 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda6 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda7 Nov 8 00:28:28.152511 extend-filesystems[1779]: Found sda9 Nov 8 00:28:28.152511 extend-filesystems[1779]: Checking size of /dev/sda9 Nov 8 00:28:28.152511 extend-filesystems[1779]: Resized partition /dev/sda9 Nov 8 00:28:28.097021 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:28:28.287384 extend-filesystems[1793]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:28:28.162356 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:28:28.189954 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:28:28.219512 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:28:28.225137 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Nov 8 00:28:28.238685 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:28:28.239126 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 8 00:28:28.298876 update_engine[1804]: I20251108 00:28:28.287565 1804 main.cc:92] Flatcar Update Engine starting Nov 8 00:28:28.298876 update_engine[1804]: I20251108 00:28:28.288317 1804 update_check_scheduler.cc:74] Next update check in 10m23s Nov 8 00:28:28.262207 systemd-logind[1799]: Watching system buttons on /dev/input/event3 (Power Button) Nov 8 00:28:28.262217 systemd-logind[1799]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 8 00:28:28.262227 systemd-logind[1799]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 8 00:28:28.262392 systemd-logind[1799]: New seat seat0. Nov 8 00:28:28.280743 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:28:28.298617 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:28:28.309053 jq[1805]: true Nov 8 00:28:28.318231 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:28:28.328007 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:28:28.328133 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:28:28.328346 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:28:28.328468 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:28:28.338868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:28:28.338985 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 8 00:28:28.353284 (ntainerd)[1809]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:28:28.354702 jq[1808]: true Nov 8 00:28:28.358842 dbus-daemon[1774]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:28:28.361157 tar[1807]: linux-amd64/LICENSE Nov 8 00:28:28.361372 tar[1807]: linux-amd64/helm Nov 8 00:28:28.364515 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 8 00:28:28.364651 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Nov 8 00:28:28.373569 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:28:28.384054 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:28:28.384218 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:28:28.395355 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:28:28.395478 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:28:28.410278 bash[1836]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:28:28.421447 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:28:28.433343 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:28:28.442137 locksmithd[1838]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:28:28.461451 systemd[1]: Starting sshkeys.service... Nov 8 00:28:28.472808 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Nov 8 00:28:28.490584 sshd_keygen[1802]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:28:28.496466 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:28:28.507711 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:28:28.508553 coreos-metadata[1850]: Nov 08 00:28:28.508 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 8 00:28:28.518839 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:28:28.532758 containerd[1809]: time="2025-11-08T00:28:28.532712604Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:28:28.532772 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:28:28.532879 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:28:28.543682 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:28:28.545457 containerd[1809]: time="2025-11-08T00:28:28.545436605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546133 containerd[1809]: time="2025-11-08T00:28:28.546117846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546133 containerd[1809]: time="2025-11-08T00:28:28.546132445Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:28:28.546189 containerd[1809]: time="2025-11-08T00:28:28.546141560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Nov 8 00:28:28.546234 containerd[1809]: time="2025-11-08T00:28:28.546224178Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:28:28.546268 containerd[1809]: time="2025-11-08T00:28:28.546237261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546300 containerd[1809]: time="2025-11-08T00:28:28.546286621Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546327 containerd[1809]: time="2025-11-08T00:28:28.546301675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546409 containerd[1809]: time="2025-11-08T00:28:28.546397922Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546439 containerd[1809]: time="2025-11-08T00:28:28.546410759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546439 containerd[1809]: time="2025-11-08T00:28:28.546418775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546439 containerd[1809]: time="2025-11-08T00:28:28.546424410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546509 containerd[1809]: time="2025-11-08T00:28:28.546471077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546596 containerd[1809]: time="2025-11-08T00:28:28.546587658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546651 containerd[1809]: time="2025-11-08T00:28:28.546642154Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:28:28.546680 containerd[1809]: time="2025-11-08T00:28:28.546650814Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:28:28.546706 containerd[1809]: time="2025-11-08T00:28:28.546696923Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:28:28.546736 containerd[1809]: time="2025-11-08T00:28:28.546723388Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:28:28.561294 containerd[1809]: time="2025-11-08T00:28:28.561249769Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:28:28.561343 containerd[1809]: time="2025-11-08T00:28:28.561290303Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:28:28.561343 containerd[1809]: time="2025-11-08T00:28:28.561309192Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:28:28.561343 containerd[1809]: time="2025-11-08T00:28:28.561325145Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:28:28.561419 containerd[1809]: time="2025-11-08T00:28:28.561341146Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Nov 8 00:28:28.561453 containerd[1809]: time="2025-11-08T00:28:28.561445570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:28:28.561642 containerd[1809]: time="2025-11-08T00:28:28.561628730Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:28:28.561723 containerd[1809]: time="2025-11-08T00:28:28.561712302Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:28:28.561752 containerd[1809]: time="2025-11-08T00:28:28.561727230Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:28:28.561752 containerd[1809]: time="2025-11-08T00:28:28.561739594Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:28:28.561799 containerd[1809]: time="2025-11-08T00:28:28.561753387Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:28:28.561799 containerd[1809]: time="2025-11-08T00:28:28.561767320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:28:28.561799 containerd[1809]: time="2025-11-08T00:28:28.561780345Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:28:28.561875 containerd[1809]: time="2025-11-08T00:28:28.561793229Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:28:28.561875 containerd[1809]: time="2025-11-08T00:28:28.561809655Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Nov 8 00:28:28.561875 containerd[1809]: time="2025-11-08T00:28:28.561824315Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:28:28.561875 containerd[1809]: time="2025-11-08T00:28:28.561837219Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:28:28.561875 containerd[1809]: time="2025-11-08T00:28:28.561855562Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561873735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561888384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561899487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561917470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561929420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561941964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561952638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561963719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 8 00:28:28.561994 containerd[1809]: time="2025-11-08T00:28:28.561974598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.561945 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.561998506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562007708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562014626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562022295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562030684Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562045154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562054919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562061256Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562089373Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562101126Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562108072Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562114598Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:28:28.562273 containerd[1809]: time="2025-11-08T00:28:28.562120033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:28:28.562595 containerd[1809]: time="2025-11-08T00:28:28.562126442Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:28:28.562595 containerd[1809]: time="2025-11-08T00:28:28.562135130Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:28:28.562595 containerd[1809]: time="2025-11-08T00:28:28.562142687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:28:28.562667 containerd[1809]: time="2025-11-08T00:28:28.562304338Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:28:28.562667 containerd[1809]: time="2025-11-08T00:28:28.562339020Z" level=info msg="Connect containerd service" Nov 8 00:28:28.562667 containerd[1809]: time="2025-11-08T00:28:28.562357346Z" level=info msg="using legacy CRI server" Nov 8 00:28:28.562667 containerd[1809]: time="2025-11-08T00:28:28.562362507Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:28:28.562667 containerd[1809]: time="2025-11-08T00:28:28.562415644Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:28:28.562873 containerd[1809]: time="2025-11-08T00:28:28.562729677Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:28:28.562873 containerd[1809]: time="2025-11-08T00:28:28.562856314Z" level=info msg="Start subscribing containerd event" Nov 8 00:28:28.562933 containerd[1809]: time="2025-11-08T00:28:28.562885738Z" level=info msg="Start recovering state" Nov 8 00:28:28.562933 containerd[1809]: time="2025-11-08T00:28:28.562891178Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Nov 8 00:28:28.562933 containerd[1809]: time="2025-11-08T00:28:28.562916301Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:28:28.562933 containerd[1809]: time="2025-11-08T00:28:28.562920306Z" level=info msg="Start event monitor" Nov 8 00:28:28.562933 containerd[1809]: time="2025-11-08T00:28:28.562932687Z" level=info msg="Start snapshots syncer" Nov 8 00:28:28.563051 containerd[1809]: time="2025-11-08T00:28:28.562938160Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:28:28.563082 containerd[1809]: time="2025-11-08T00:28:28.562944650Z" level=info msg="Start streaming server" Nov 8 00:28:28.563139 containerd[1809]: time="2025-11-08T00:28:28.563126063Z" level=info msg="containerd successfully booted in 0.030885s" Nov 8 00:28:28.572552 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:28:28.590489 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:28:28.614464 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Nov 8 00:28:28.624472 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:28:28.650249 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Nov 8 00:28:28.675315 extend-filesystems[1793]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:28:28.675315 extend-filesystems[1793]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 8 00:28:28.675315 extend-filesystems[1793]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Nov 8 00:28:28.717327 extend-filesystems[1779]: Resized filesystem in /dev/sda9 Nov 8 00:28:28.717327 extend-filesystems[1779]: Found sdb Nov 8 00:28:28.732336 tar[1807]: linux-amd64/README.md Nov 8 00:28:28.676226 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:28:28.676332 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 8 00:28:28.727357 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:28:29.133429 systemd-networkd[1505]: bond0: Gained IPv6LL Nov 8 00:28:29.266134 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:28:29.280291 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:28:29.301522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:29.312014 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:28:29.331610 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:28:30.090146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:30.101896 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:30.499648 kubelet[1906]: E1108 00:28:30.499564 1906 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:30.500624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:30.500702 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:31.060215 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:28:31.077487 systemd[1]: Started sshd@0-139.178.94.39:22-139.178.68.195:40810.service - OpenSSH per-connection server daemon (139.178.68.195:40810). Nov 8 00:28:32.442901 systemd-resolved[1734]: Clock change detected. Flushing caches. Nov 8 00:28:32.442962 systemd-timesyncd[1768]: Contacted time server 83.147.242.172:123 (0.flatcar.pool.ntp.org). 
Nov 8 00:28:32.442989 systemd-timesyncd[1768]: Initial clock synchronization to Sat 2025-11-08 00:28:32.442885 UTC. Nov 8 00:28:32.458097 sshd[1925]: Accepted publickey for core from 139.178.68.195 port 40810 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:32.459154 sshd[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:32.464427 systemd-logind[1799]: New session 1 of user core. Nov 8 00:28:32.465271 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:28:32.482376 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:28:32.494962 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:28:32.516430 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:28:32.525847 (systemd)[1929]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:28:32.601331 systemd[1929]: Queued start job for default target default.target. Nov 8 00:28:32.612700 systemd[1929]: Created slice app.slice - User Application Slice. Nov 8 00:28:32.612715 systemd[1929]: Reached target paths.target - Paths. Nov 8 00:28:32.612724 systemd[1929]: Reached target timers.target - Timers. Nov 8 00:28:32.613402 systemd[1929]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:28:32.619091 systemd[1929]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:28:32.619121 systemd[1929]: Reached target sockets.target - Sockets. Nov 8 00:28:32.619130 systemd[1929]: Reached target basic.target - Basic System. Nov 8 00:28:32.619174 systemd[1929]: Reached target default.target - Main User Target. Nov 8 00:28:32.619190 systemd[1929]: Startup finished in 90ms. Nov 8 00:28:32.619263 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:28:32.635287 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 8 00:28:32.700696 systemd[1]: Started sshd@1-139.178.94.39:22-139.178.68.195:40816.service - OpenSSH per-connection server daemon (139.178.68.195:40816). Nov 8 00:28:32.741638 sshd[1940]: Accepted publickey for core from 139.178.68.195 port 40816 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:32.742495 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:32.744700 systemd-logind[1799]: New session 2 of user core. Nov 8 00:28:32.766303 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:28:32.825536 sshd[1940]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:32.836645 systemd[1]: sshd@1-139.178.94.39:22-139.178.68.195:40816.service: Deactivated successfully. Nov 8 00:28:32.838310 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:28:32.840031 systemd-logind[1799]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:28:32.841695 systemd[1]: Started sshd@2-139.178.94.39:22-139.178.68.195:40830.service - OpenSSH per-connection server daemon (139.178.68.195:40830). Nov 8 00:28:32.857902 systemd-logind[1799]: Removed session 2. Nov 8 00:28:32.870208 kernel: mlx5_core 0000:02:00.0: lag map: port 1:1 port 2:2 Nov 8 00:28:32.870335 kernel: mlx5_core 0000:02:00.0: shared_fdb:0 mode:queue_affinity Nov 8 00:28:32.926504 sshd[1947]: Accepted publickey for core from 139.178.68.195 port 40830 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:32.927179 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:32.929613 systemd-logind[1799]: New session 3 of user core. Nov 8 00:28:32.949589 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:28:33.028964 sshd[1947]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:33.036386 systemd[1]: sshd@2-139.178.94.39:22-139.178.68.195:40830.service: Deactivated successfully. 
Nov 8 00:28:33.040242 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:28:33.041977 systemd-logind[1799]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:28:33.044173 systemd-logind[1799]: Removed session 3. Nov 8 00:28:33.620608 coreos-metadata[1850]: Nov 08 00:28:33.620 INFO Fetch successful Nov 8 00:28:33.651115 unknown[1850]: wrote ssh authorized keys file for user: core Nov 8 00:28:33.677998 update-ssh-keys[1958]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:28:33.678558 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:28:33.690030 systemd[1]: Finished sshkeys.service. Nov 8 00:28:34.941015 coreos-metadata[1773]: Nov 08 00:28:34.940 INFO Fetch successful Nov 8 00:28:34.976520 login[1882]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:28:34.976761 login[1883]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:28:34.979280 systemd-logind[1799]: New session 5 of user core. Nov 8 00:28:34.979573 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:28:34.980599 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:28:34.981353 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Nov 8 00:28:34.982438 systemd-logind[1799]: New session 4 of user core. Nov 8 00:28:34.982879 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:28:35.414005 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Nov 8 00:28:35.414664 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:28:35.414920 systemd[1]: Startup finished in 1.855s (kernel) + 36.234s (initrd) + 11.986s (userspace) = 50.076s. Nov 8 00:28:41.940418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:28:41.952423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:28:42.226045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:42.228277 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:42.250488 kubelet[2002]: E1108 00:28:42.250460 2002 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:42.252604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:42.252694 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:43.061439 systemd[1]: Started sshd@3-139.178.94.39:22-139.178.68.195:49560.service - OpenSSH per-connection server daemon (139.178.68.195:49560). Nov 8 00:28:43.090561 sshd[2022]: Accepted publickey for core from 139.178.68.195 port 49560 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:43.091204 sshd[2022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:43.093619 systemd-logind[1799]: New session 6 of user core. Nov 8 00:28:43.106414 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:28:43.157706 sshd[2022]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:43.171750 systemd[1]: sshd@3-139.178.94.39:22-139.178.68.195:49560.service: Deactivated successfully. Nov 8 00:28:43.172537 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:28:43.173239 systemd-logind[1799]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:28:43.173906 systemd[1]: Started sshd@4-139.178.94.39:22-139.178.68.195:49564.service - OpenSSH per-connection server daemon (139.178.68.195:49564). 
Nov 8 00:28:43.174486 systemd-logind[1799]: Removed session 6. Nov 8 00:28:43.218095 sshd[2029]: Accepted publickey for core from 139.178.68.195 port 49564 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:43.219114 sshd[2029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:43.222876 systemd-logind[1799]: New session 7 of user core. Nov 8 00:28:43.233375 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:28:43.287299 sshd[2029]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:43.309066 systemd[1]: sshd@4-139.178.94.39:22-139.178.68.195:49564.service: Deactivated successfully. Nov 8 00:28:43.310168 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:28:43.310826 systemd-logind[1799]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:28:43.311453 systemd[1]: Started sshd@5-139.178.94.39:22-139.178.68.195:49576.service - OpenSSH per-connection server daemon (139.178.68.195:49576). Nov 8 00:28:43.311861 systemd-logind[1799]: Removed session 7. Nov 8 00:28:43.343429 sshd[2036]: Accepted publickey for core from 139.178.68.195 port 49576 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:43.344114 sshd[2036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:43.346812 systemd-logind[1799]: New session 8 of user core. Nov 8 00:28:43.358391 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:28:43.410154 sshd[2036]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:43.419756 systemd[1]: sshd@5-139.178.94.39:22-139.178.68.195:49576.service: Deactivated successfully. Nov 8 00:28:43.420511 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:28:43.421131 systemd-logind[1799]: Session 8 logged out. Waiting for processes to exit. 
Nov 8 00:28:43.421809 systemd[1]: Started sshd@6-139.178.94.39:22-139.178.68.195:49590.service - OpenSSH per-connection server daemon (139.178.68.195:49590). Nov 8 00:28:43.422318 systemd-logind[1799]: Removed session 8. Nov 8 00:28:43.465068 sshd[2043]: Accepted publickey for core from 139.178.68.195 port 49590 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:43.465962 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:43.469224 systemd-logind[1799]: New session 9 of user core. Nov 8 00:28:43.479373 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:28:43.545982 sudo[2046]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:28:43.546131 sudo[2046]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:43.566636 sudo[2046]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:43.567418 sshd[2043]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:43.586685 systemd[1]: sshd@6-139.178.94.39:22-139.178.68.195:49590.service: Deactivated successfully. Nov 8 00:28:43.590534 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:28:43.593935 systemd-logind[1799]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:28:43.610008 systemd[1]: Started sshd@7-139.178.94.39:22-139.178.68.195:49600.service - OpenSSH per-connection server daemon (139.178.68.195:49600). Nov 8 00:28:43.612486 systemd-logind[1799]: Removed session 9. Nov 8 00:28:43.671176 sshd[2051]: Accepted publickey for core from 139.178.68.195 port 49600 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:43.672002 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:43.674933 systemd-logind[1799]: New session 10 of user core. Nov 8 00:28:43.684382 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 8 00:28:43.734883 sudo[2056]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:28:43.735218 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:43.739310 sudo[2056]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:43.747256 sudo[2055]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:28:43.747836 sudo[2055]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:43.782915 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:43.786289 auditctl[2059]: No rules Nov 8 00:28:43.787187 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:28:43.787660 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:43.793443 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:28:43.841072 augenrules[2077]: No rules Nov 8 00:28:43.842763 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:28:43.843999 sudo[2055]: pam_unix(sudo:session): session closed for user root Nov 8 00:28:43.845812 sshd[2051]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:43.875210 systemd[1]: sshd@7-139.178.94.39:22-139.178.68.195:49600.service: Deactivated successfully. Nov 8 00:28:43.878866 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:28:43.882256 systemd-logind[1799]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:28:43.899962 systemd[1]: Started sshd@8-139.178.94.39:22-139.178.68.195:49616.service - OpenSSH per-connection server daemon (139.178.68.195:49616). Nov 8 00:28:43.902699 systemd-logind[1799]: Removed session 10. 
Nov 8 00:28:43.954575 sshd[2085]: Accepted publickey for core from 139.178.68.195 port 49616 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:28:43.955202 sshd[2085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:43.957622 systemd-logind[1799]: New session 11 of user core. Nov 8 00:28:43.974418 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:28:44.026303 sudo[2088]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:28:44.026456 sudo[2088]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:28:44.293515 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:28:44.293603 (dockerd)[2112]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:28:44.544694 dockerd[2112]: time="2025-11-08T00:28:44.544603413Z" level=info msg="Starting up" Nov 8 00:28:44.614469 dockerd[2112]: time="2025-11-08T00:28:44.614420326Z" level=info msg="Loading containers: start." Nov 8 00:28:44.704147 kernel: Initializing XFRM netlink socket Nov 8 00:28:44.758166 systemd-networkd[1505]: docker0: Link UP Nov 8 00:28:44.785682 dockerd[2112]: time="2025-11-08T00:28:44.785560542Z" level=info msg="Loading containers: done." 
Nov 8 00:28:44.813893 dockerd[2112]: time="2025-11-08T00:28:44.813841853Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:28:44.813959 dockerd[2112]: time="2025-11-08T00:28:44.813896113Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:28:44.813959 dockerd[2112]: time="2025-11-08T00:28:44.813949343Z" level=info msg="Daemon has completed initialization" Nov 8 00:28:44.814166 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3637567731-merged.mount: Deactivated successfully. Nov 8 00:28:44.829337 dockerd[2112]: time="2025-11-08T00:28:44.829278564Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:28:44.829377 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:28:45.498117 containerd[1809]: time="2025-11-08T00:28:45.498068467Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:28:46.520322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765715335.mount: Deactivated successfully. 
Nov 8 00:28:47.268733 containerd[1809]: time="2025-11-08T00:28:47.268678305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:47.268949 containerd[1809]: time="2025-11-08T00:28:47.268898985Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 8 00:28:47.269423 containerd[1809]: time="2025-11-08T00:28:47.269383031Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:47.271398 containerd[1809]: time="2025-11-08T00:28:47.271356813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:47.271918 containerd[1809]: time="2025-11-08T00:28:47.271869910Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.773780247s" Nov 8 00:28:47.271918 containerd[1809]: time="2025-11-08T00:28:47.271887865Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 8 00:28:47.272247 containerd[1809]: time="2025-11-08T00:28:47.272203905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 8 00:28:48.281424 containerd[1809]: time="2025-11-08T00:28:48.281399553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:48.281678 containerd[1809]: time="2025-11-08T00:28:48.281623921Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 8 00:28:48.282040 containerd[1809]: time="2025-11-08T00:28:48.282000378Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:48.283962 containerd[1809]: time="2025-11-08T00:28:48.283920580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:48.284496 containerd[1809]: time="2025-11-08T00:28:48.284455136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.012235252s" Nov 8 00:28:48.284496 containerd[1809]: time="2025-11-08T00:28:48.284471014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 8 00:28:48.284797 containerd[1809]: time="2025-11-08T00:28:48.284756737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 8 00:28:49.061441 containerd[1809]: time="2025-11-08T00:28:49.061390193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:49.061631 containerd[1809]: time="2025-11-08T00:28:49.061578304Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 8 00:28:49.062805 containerd[1809]: time="2025-11-08T00:28:49.062791888Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:49.064415 containerd[1809]: time="2025-11-08T00:28:49.064400666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:49.065040 containerd[1809]: time="2025-11-08T00:28:49.065026096Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 780.253086ms" Nov 8 00:28:49.065064 containerd[1809]: time="2025-11-08T00:28:49.065042818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 8 00:28:49.065319 containerd[1809]: time="2025-11-08T00:28:49.065271331Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:28:50.001629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762950572.mount: Deactivated successfully. 
Nov 8 00:28:50.134190 containerd[1809]: time="2025-11-08T00:28:50.134165351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:50.134431 containerd[1809]: time="2025-11-08T00:28:50.134412024Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 8 00:28:50.134858 containerd[1809]: time="2025-11-08T00:28:50.134844987Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:50.136281 containerd[1809]: time="2025-11-08T00:28:50.136073367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:50.137092 containerd[1809]: time="2025-11-08T00:28:50.137077202Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.071785526s" Nov 8 00:28:50.137123 containerd[1809]: time="2025-11-08T00:28:50.137094174Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 8 00:28:50.137335 containerd[1809]: time="2025-11-08T00:28:50.137323852Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 8 00:28:50.656407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475676563.mount: Deactivated successfully. 
Nov 8 00:28:51.315391 containerd[1809]: time="2025-11-08T00:28:51.315338268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.315602 containerd[1809]: time="2025-11-08T00:28:51.315514605Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 8 00:28:51.316008 containerd[1809]: time="2025-11-08T00:28:51.315966800Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.317684 containerd[1809]: time="2025-11-08T00:28:51.317665029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.318823 containerd[1809]: time="2025-11-08T00:28:51.318810169Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.181470959s" Nov 8 00:28:51.318849 containerd[1809]: time="2025-11-08T00:28:51.318823616Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 8 00:28:51.319086 containerd[1809]: time="2025-11-08T00:28:51.319076853Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:28:51.986088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621339238.mount: Deactivated successfully. 
Nov 8 00:28:51.987257 containerd[1809]: time="2025-11-08T00:28:51.987238157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.987377 containerd[1809]: time="2025-11-08T00:28:51.987352895Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 8 00:28:51.987817 containerd[1809]: time="2025-11-08T00:28:51.987805287Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.988915 containerd[1809]: time="2025-11-08T00:28:51.988883882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:51.989412 containerd[1809]: time="2025-11-08T00:28:51.989377786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 670.286293ms" Nov 8 00:28:51.989412 containerd[1809]: time="2025-11-08T00:28:51.989408961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 8 00:28:51.989784 containerd[1809]: time="2025-11-08T00:28:51.989772584Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:28:52.438731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:28:52.450306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:28:52.717596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:52.719808 (kubelet)[2421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:28:52.741255 kubelet[2421]: E1108 00:28:52.741157 2421 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:28:52.742411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:28:52.742504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:28:54.115914 containerd[1809]: time="2025-11-08T00:28:54.115855197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:54.116127 containerd[1809]: time="2025-11-08T00:28:54.116104298Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 8 00:28:54.116548 containerd[1809]: time="2025-11-08T00:28:54.116508027Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:54.118737 containerd[1809]: time="2025-11-08T00:28:54.118695848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:28:54.119334 containerd[1809]: time="2025-11-08T00:28:54.119292729Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag 
\"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.129505999s" Nov 8 00:28:54.119334 containerd[1809]: time="2025-11-08T00:28:54.119309113Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 8 00:28:56.132501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:56.151494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:56.164052 systemd[1]: Reloading requested from client PID 2534 ('systemctl') (unit session-11.scope)... Nov 8 00:28:56.164074 systemd[1]: Reloading... Nov 8 00:28:56.207202 zram_generator::config[2573]: No configuration found. Nov 8 00:28:56.272902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:28:56.334228 systemd[1]: Reloading finished in 169 ms. Nov 8 00:28:56.356388 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:28:56.356428 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:28:56.356551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:56.373861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:28:56.614492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:28:56.620469 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:28:56.648603 kubelet[2637]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 8 00:28:56.648603 kubelet[2637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:28:56.648825 kubelet[2637]: I1108 00:28:56.648605 2637 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:28:56.872007 kubelet[2637]: I1108 00:28:56.871943 2637 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:28:56.872007 kubelet[2637]: I1108 00:28:56.871952 2637 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:28:56.875919 kubelet[2637]: I1108 00:28:56.875889 2637 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:28:56.875919 kubelet[2637]: I1108 00:28:56.875914 2637 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:28:56.876017 kubelet[2637]: I1108 00:28:56.876011 2637 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:28:56.879112 kubelet[2637]: E1108 00:28:56.879094 2637 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.94.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.94.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:28:56.879353 kubelet[2637]: I1108 00:28:56.879344 2637 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:28:56.882216 kubelet[2637]: E1108 00:28:56.882173 2637 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:28:56.882216 kubelet[2637]: I1108 00:28:56.882201 2637 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:28:56.890200 kubelet[2637]: I1108 00:28:56.890152 2637 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:28:56.890358 kubelet[2637]: I1108 00:28:56.890315 2637 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:28:56.890438 kubelet[2637]: I1108 00:28:56.890329 2637 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8b27c00582","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:28:56.890438 kubelet[2637]: I1108 00:28:56.890410 2637 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 
00:28:56.890438 kubelet[2637]: I1108 00:28:56.890415 2637 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:28:56.890533 kubelet[2637]: I1108 00:28:56.890466 2637 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:28:56.891515 kubelet[2637]: I1108 00:28:56.891508 2637 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:28:56.892895 kubelet[2637]: I1108 00:28:56.892888 2637 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:28:56.892919 kubelet[2637]: I1108 00:28:56.892896 2637 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:28:56.892919 kubelet[2637]: I1108 00:28:56.892908 2637 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:28:56.892919 kubelet[2637]: I1108 00:28:56.892914 2637 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:28:56.893319 kubelet[2637]: E1108 00:28:56.893305 2637 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.94.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:28:56.893357 kubelet[2637]: E1108 00:28:56.893343 2637 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.94.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-8b27c00582&limit=500&resourceVersion=0\": dial tcp 139.178.94.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:28:56.894487 kubelet[2637]: I1108 00:28:56.894423 2637 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:28:56.894828 kubelet[2637]: I1108 00:28:56.894792 2637 kubelet.go:940] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:28:56.894828 kubelet[2637]: I1108 00:28:56.894810 2637 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:28:56.894873 kubelet[2637]: W1108 00:28:56.894833 2637 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:28:56.896296 kubelet[2637]: I1108 00:28:56.896289 2637 server.go:1262] "Started kubelet" Nov 8 00:28:56.896340 kubelet[2637]: I1108 00:28:56.896320 2637 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:28:56.896492 kubelet[2637]: I1108 00:28:56.896450 2637 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:28:56.896537 kubelet[2637]: I1108 00:28:56.896520 2637 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:28:56.896713 kubelet[2637]: I1108 00:28:56.896706 2637 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:28:56.896861 kubelet[2637]: I1108 00:28:56.896848 2637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:28:56.899928 kubelet[2637]: I1108 00:28:56.899749 2637 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:28:56.899928 kubelet[2637]: I1108 00:28:56.899758 2637 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:28:56.899928 kubelet[2637]: E1108 00:28:56.899789 2637 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8b27c00582\" not found" Nov 8 00:28:56.899928 kubelet[2637]: I1108 00:28:56.899882 2637 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:28:56.899928 kubelet[2637]: I1108 00:28:56.899898 2637 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:28:56.900170 kubelet[2637]: E1108 00:28:56.900141 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8b27c00582?timeout=10s\": dial tcp 139.178.94.39:6443: connect: connection refused" interval="200ms" Nov 8 00:28:56.900382 kubelet[2637]: I1108 00:28:56.900369 2637 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:28:56.900420 kubelet[2637]: E1108 00:28:56.900403 2637 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.94.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.94.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:28:56.900458 kubelet[2637]: I1108 00:28:56.900443 2637 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:28:56.901540 kubelet[2637]: E1108 00:28:56.901529 2637 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:28:56.901572 kubelet[2637]: I1108 00:28:56.901559 2637 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:28:56.901691 kubelet[2637]: I1108 00:28:56.901684 2637 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:28:56.902291 kubelet[2637]: E1108 00:28:56.901386 2637 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.94.39:6443/api/v1/namespaces/default/events\": dial tcp 139.178.94.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-8b27c00582.1875e08fae65ad18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-8b27c00582,UID:ci-4081.3.6-n-8b27c00582,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-8b27c00582,},FirstTimestamp:2025-11-08 00:28:56.8962614 +0000 UTC m=+0.272285341,LastTimestamp:2025-11-08 00:28:56.8962614 +0000 UTC m=+0.272285341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-8b27c00582,}" Nov 8 00:28:56.910114 kubelet[2637]: I1108 00:28:56.910097 2637 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:28:56.910623 kubelet[2637]: I1108 00:28:56.910615 2637 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:28:56.910653 kubelet[2637]: I1108 00:28:56.910624 2637 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:28:56.910653 kubelet[2637]: I1108 00:28:56.910637 2637 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:28:56.910690 kubelet[2637]: E1108 00:28:56.910663 2637 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:28:56.910918 kubelet[2637]: E1108 00:28:56.910905 2637 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.94.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:28:56.924302 kubelet[2637]: I1108 00:28:56.924240 2637 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:28:56.924302 kubelet[2637]: I1108 00:28:56.924275 2637 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:28:56.924557 kubelet[2637]: I1108 00:28:56.924319 2637 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:28:56.925626 kubelet[2637]: I1108 00:28:56.925591 2637 policy_none.go:49] "None policy: Start" Nov 8 00:28:56.925626 kubelet[2637]: I1108 00:28:56.925608 2637 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:28:56.925626 kubelet[2637]: I1108 00:28:56.925621 2637 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:28:56.926290 kubelet[2637]: I1108 00:28:56.926200 2637 policy_none.go:47] "Start" Nov 8 00:28:56.929011 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:28:56.952886 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 8 00:28:56.961268 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:28:56.975959 kubelet[2637]: E1108 00:28:56.975858 2637 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:28:56.976364 kubelet[2637]: I1108 00:28:56.976309 2637 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:28:56.976364 kubelet[2637]: I1108 00:28:56.976338 2637 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:28:56.976722 kubelet[2637]: I1108 00:28:56.976673 2637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:28:56.978135 kubelet[2637]: E1108 00:28:56.978089 2637 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:28:56.978335 kubelet[2637]: E1108 00:28:56.978190 2637 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-8b27c00582\" not found" Nov 8 00:28:57.021345 systemd[1]: Created slice kubepods-burstable-pod26fe7a40135bf29e0e7fe5a4be904e69.slice - libcontainer container kubepods-burstable-pod26fe7a40135bf29e0e7fe5a4be904e69.slice. Nov 8 00:28:57.035796 kubelet[2637]: E1108 00:28:57.035761 2637 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8b27c00582\" not found" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.037485 systemd[1]: Created slice kubepods-burstable-pod1513553acaf2f466f8fb77f302b16960.slice - libcontainer container kubepods-burstable-pod1513553acaf2f466f8fb77f302b16960.slice. 
Nov 8 00:28:57.049192 kubelet[2637]: E1108 00:28:57.049146 2637 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8b27c00582\" not found" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.051464 systemd[1]: Created slice kubepods-burstable-pod4efc25b0429731247e3ac702423305af.slice - libcontainer container kubepods-burstable-pod4efc25b0429731247e3ac702423305af.slice. Nov 8 00:28:57.052818 kubelet[2637]: E1108 00:28:57.052778 2637 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-8b27c00582\" not found" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.080661 kubelet[2637]: I1108 00:28:57.080574 2637 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.081448 kubelet[2637]: E1108 00:28:57.081339 2637 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.39:6443/api/v1/nodes\": dial tcp 139.178.94.39:6443: connect: connection refused" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101008 kubelet[2637]: I1108 00:28:57.100932 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1513553acaf2f466f8fb77f302b16960-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" (UID: \"1513553acaf2f466f8fb77f302b16960\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101008 kubelet[2637]: I1108 00:28:57.101015 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1513553acaf2f466f8fb77f302b16960-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" (UID: \"1513553acaf2f466f8fb77f302b16960\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101464 kubelet[2637]: I1108 00:28:57.101068 2637 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101464 kubelet[2637]: I1108 00:28:57.101118 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101464 kubelet[2637]: I1108 00:28:57.101187 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101464 kubelet[2637]: I1108 00:28:57.101266 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101464 kubelet[2637]: I1108 00:28:57.101312 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26fe7a40135bf29e0e7fe5a4be904e69-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8b27c00582\" (UID: 
\"26fe7a40135bf29e0e7fe5a4be904e69\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101865 kubelet[2637]: E1108 00:28:57.101347 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8b27c00582?timeout=10s\": dial tcp 139.178.94.39:6443: connect: connection refused" interval="400ms" Nov 8 00:28:57.101865 kubelet[2637]: I1108 00:28:57.101360 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1513553acaf2f466f8fb77f302b16960-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" (UID: \"1513553acaf2f466f8fb77f302b16960\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.101865 kubelet[2637]: I1108 00:28:57.101503 2637 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.285825 kubelet[2637]: I1108 00:28:57.285726 2637 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.286581 kubelet[2637]: E1108 00:28:57.286471 2637 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.39:6443/api/v1/nodes\": dial tcp 139.178.94.39:6443: connect: connection refused" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.339115 containerd[1809]: time="2025-11-08T00:28:57.339045100Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8b27c00582,Uid:26fe7a40135bf29e0e7fe5a4be904e69,Namespace:kube-system,Attempt:0,}" Nov 8 00:28:57.351755 containerd[1809]: time="2025-11-08T00:28:57.351720653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8b27c00582,Uid:1513553acaf2f466f8fb77f302b16960,Namespace:kube-system,Attempt:0,}" Nov 8 00:28:57.353750 containerd[1809]: time="2025-11-08T00:28:57.353692205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8b27c00582,Uid:4efc25b0429731247e3ac702423305af,Namespace:kube-system,Attempt:0,}" Nov 8 00:28:57.501883 kubelet[2637]: E1108 00:28:57.501825 2637 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.94.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-8b27c00582?timeout=10s\": dial tcp 139.178.94.39:6443: connect: connection refused" interval="800ms" Nov 8 00:28:57.688614 kubelet[2637]: I1108 00:28:57.688576 2637 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.688797 kubelet[2637]: E1108 00:28:57.688770 2637 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.94.39:6443/api/v1/nodes\": dial tcp 139.178.94.39:6443: connect: connection refused" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:57.767173 kubelet[2637]: E1108 00:28:57.767122 2637 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.94.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.94.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:28:57.825346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081154942.mount: Deactivated successfully. 
Nov 8 00:28:57.826862 containerd[1809]: time="2025-11-08T00:28:57.826819002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:57.827132 containerd[1809]: time="2025-11-08T00:28:57.827089001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:28:57.827793 containerd[1809]: time="2025-11-08T00:28:57.827743191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:57.828380 containerd[1809]: time="2025-11-08T00:28:57.828345668Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:57.828516 containerd[1809]: time="2025-11-08T00:28:57.828460596Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:28:57.829106 containerd[1809]: time="2025-11-08T00:28:57.829061935Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:28:57.829234 containerd[1809]: time="2025-11-08T00:28:57.829181156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:57.832526 containerd[1809]: time="2025-11-08T00:28:57.832481913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:28:57.832975 
containerd[1809]: time="2025-11-08T00:28:57.832935637Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 493.802465ms" Nov 8 00:28:57.833443 containerd[1809]: time="2025-11-08T00:28:57.833397236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 481.604405ms" Nov 8 00:28:57.834257 containerd[1809]: time="2025-11-08T00:28:57.834221994Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.466225ms" Nov 8 00:28:57.931608 kubelet[2637]: E1108 00:28:57.931556 2637 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.94.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.94.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:28:57.941164 containerd[1809]: time="2025-11-08T00:28:57.940898786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:57.941164 containerd[1809]: time="2025-11-08T00:28:57.941121753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:57.941164 containerd[1809]: time="2025-11-08T00:28:57.941131757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:57.941164 containerd[1809]: time="2025-11-08T00:28:57.940952157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941176094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941183587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941187784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941190211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941222479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941232160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941253285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:57.941317 containerd[1809]: time="2025-11-08T00:28:57.941272289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:28:57.957453 systemd[1]: Started cri-containerd-0c8045597a691baf8178227f09a3ad172361057ce40b3907c154f3b37ebf641c.scope - libcontainer container 0c8045597a691baf8178227f09a3ad172361057ce40b3907c154f3b37ebf641c. Nov 8 00:28:57.958118 systemd[1]: Started cri-containerd-4216dab88a425193e55a3365d7c9a9965029e2ef1cfaf79ee293d037c001545c.scope - libcontainer container 4216dab88a425193e55a3365d7c9a9965029e2ef1cfaf79ee293d037c001545c. Nov 8 00:28:57.958798 systemd[1]: Started cri-containerd-ee160d7b99e44d62b5b5df22cc8d208919dc8236cca1576a5dc52e8b8a84bbf4.scope - libcontainer container ee160d7b99e44d62b5b5df22cc8d208919dc8236cca1576a5dc52e8b8a84bbf4. Nov 8 00:28:57.979560 containerd[1809]: time="2025-11-08T00:28:57.979538203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-8b27c00582,Uid:4efc25b0429731247e3ac702423305af,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c8045597a691baf8178227f09a3ad172361057ce40b3907c154f3b37ebf641c\"" Nov 8 00:28:57.980916 containerd[1809]: time="2025-11-08T00:28:57.980900665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-8b27c00582,Uid:1513553acaf2f466f8fb77f302b16960,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee160d7b99e44d62b5b5df22cc8d208919dc8236cca1576a5dc52e8b8a84bbf4\"" Nov 8 00:28:57.981536 containerd[1809]: time="2025-11-08T00:28:57.981524394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-8b27c00582,Uid:26fe7a40135bf29e0e7fe5a4be904e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"4216dab88a425193e55a3365d7c9a9965029e2ef1cfaf79ee293d037c001545c\"" Nov 8 00:28:57.982085 containerd[1809]: 
time="2025-11-08T00:28:57.982073455Z" level=info msg="CreateContainer within sandbox \"0c8045597a691baf8178227f09a3ad172361057ce40b3907c154f3b37ebf641c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:28:57.982446 containerd[1809]: time="2025-11-08T00:28:57.982435782Z" level=info msg="CreateContainer within sandbox \"ee160d7b99e44d62b5b5df22cc8d208919dc8236cca1576a5dc52e8b8a84bbf4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:28:57.983092 containerd[1809]: time="2025-11-08T00:28:57.983079442Z" level=info msg="CreateContainer within sandbox \"4216dab88a425193e55a3365d7c9a9965029e2ef1cfaf79ee293d037c001545c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:28:57.989642 containerd[1809]: time="2025-11-08T00:28:57.989593984Z" level=info msg="CreateContainer within sandbox \"4216dab88a425193e55a3365d7c9a9965029e2ef1cfaf79ee293d037c001545c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17a92fab8aa49e3b2c2fc5209973c6a0cf37620b7913446a9eb049b3a7476e13\"" Nov 8 00:28:57.989894 containerd[1809]: time="2025-11-08T00:28:57.989862808Z" level=info msg="CreateContainer within sandbox \"0c8045597a691baf8178227f09a3ad172361057ce40b3907c154f3b37ebf641c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"92beb7c998db80301efea24a7e3071d75fad3d327fec2ebb71a0ad6cc54fc2a5\"" Nov 8 00:28:57.989894 containerd[1809]: time="2025-11-08T00:28:57.989882288Z" level=info msg="StartContainer for \"17a92fab8aa49e3b2c2fc5209973c6a0cf37620b7913446a9eb049b3a7476e13\"" Nov 8 00:28:57.990025 containerd[1809]: time="2025-11-08T00:28:57.990014602Z" level=info msg="StartContainer for \"92beb7c998db80301efea24a7e3071d75fad3d327fec2ebb71a0ad6cc54fc2a5\"" Nov 8 00:28:57.990048 containerd[1809]: time="2025-11-08T00:28:57.990022704Z" level=info msg="CreateContainer within sandbox \"ee160d7b99e44d62b5b5df22cc8d208919dc8236cca1576a5dc52e8b8a84bbf4\" 
for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a67c18df164a1758b64ab46ce31825a19ca86b6a32b9ff451edb1a7c8e719b50\"" Nov 8 00:28:57.990186 containerd[1809]: time="2025-11-08T00:28:57.990174690Z" level=info msg="StartContainer for \"a67c18df164a1758b64ab46ce31825a19ca86b6a32b9ff451edb1a7c8e719b50\"" Nov 8 00:28:58.013462 systemd[1]: Started cri-containerd-17a92fab8aa49e3b2c2fc5209973c6a0cf37620b7913446a9eb049b3a7476e13.scope - libcontainer container 17a92fab8aa49e3b2c2fc5209973c6a0cf37620b7913446a9eb049b3a7476e13. Nov 8 00:28:58.014094 systemd[1]: Started cri-containerd-92beb7c998db80301efea24a7e3071d75fad3d327fec2ebb71a0ad6cc54fc2a5.scope - libcontainer container 92beb7c998db80301efea24a7e3071d75fad3d327fec2ebb71a0ad6cc54fc2a5. Nov 8 00:28:58.014615 systemd[1]: Started cri-containerd-a67c18df164a1758b64ab46ce31825a19ca86b6a32b9ff451edb1a7c8e719b50.scope - libcontainer container a67c18df164a1758b64ab46ce31825a19ca86b6a32b9ff451edb1a7c8e719b50. Nov 8 00:28:58.042009 containerd[1809]: time="2025-11-08T00:28:58.041977756Z" level=info msg="StartContainer for \"17a92fab8aa49e3b2c2fc5209973c6a0cf37620b7913446a9eb049b3a7476e13\" returns successfully" Nov 8 00:28:58.042108 containerd[1809]: time="2025-11-08T00:28:58.042033756Z" level=info msg="StartContainer for \"92beb7c998db80301efea24a7e3071d75fad3d327fec2ebb71a0ad6cc54fc2a5\" returns successfully" Nov 8 00:28:58.042108 containerd[1809]: time="2025-11-08T00:28:58.041986283Z" level=info msg="StartContainer for \"a67c18df164a1758b64ab46ce31825a19ca86b6a32b9ff451edb1a7c8e719b50\" returns successfully" Nov 8 00:28:58.490946 kubelet[2637]: I1108 00:28:58.490898 2637 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.666043 kubelet[2637]: E1108 00:28:58.666018 2637 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-8b27c00582\" not found" node="ci-4081.3.6-n-8b27c00582" Nov 8 
00:28:58.766858 kubelet[2637]: I1108 00:28:58.766792 2637 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.800684 kubelet[2637]: I1108 00:28:58.800666 2637 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.803087 kubelet[2637]: E1108 00:28:58.803073 2637 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8b27c00582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.803087 kubelet[2637]: I1108 00:28:58.803086 2637 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.803984 kubelet[2637]: E1108 00:28:58.803974 2637 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.803984 kubelet[2637]: I1108 00:28:58.803983 2637 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.804823 kubelet[2637]: E1108 00:28:58.804813 2637 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.893680 kubelet[2637]: I1108 00:28:58.893665 2637 apiserver.go:52] "Watching apiserver" Nov 8 00:28:58.900502 kubelet[2637]: I1108 00:28:58.900489 2637 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:28:58.918222 kubelet[2637]: I1108 00:28:58.918182 2637 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.920341 kubelet[2637]: I1108 00:28:58.920307 2637 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.921587 kubelet[2637]: E1108 00:28:58.921534 2637 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.922408 kubelet[2637]: I1108 00:28:58.922345 2637 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.923624 kubelet[2637]: E1108 00:28:58.923546 2637 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:58.925176 kubelet[2637]: E1108 00:28:58.925081 2637 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8b27c00582\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:59.924603 kubelet[2637]: I1108 00:28:59.924553 2637 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:59.925678 kubelet[2637]: I1108 00:28:59.924758 2637 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:28:59.930802 kubelet[2637]: I1108 00:28:59.930748 2637 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:28:59.931288 kubelet[2637]: I1108 00:28:59.931215 2637 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:29:01.088532 systemd[1]: Reloading requested from client PID 2967 ('systemctl') (unit session-11.scope)... Nov 8 00:29:01.088539 systemd[1]: Reloading... Nov 8 00:29:01.152216 zram_generator::config[3006]: No configuration found. Nov 8 00:29:01.217838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:29:01.286624 systemd[1]: Reloading finished in 197 ms. Nov 8 00:29:01.313357 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:01.322548 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:29:01.322672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:01.338104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:29:01.619132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:29:01.621729 (kubelet)[3070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:29:01.642251 kubelet[3070]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:29:01.642251 kubelet[3070]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:29:01.642481 kubelet[3070]: I1108 00:29:01.642293 3070 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:29:01.646647 kubelet[3070]: I1108 00:29:01.646601 3070 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:29:01.646647 kubelet[3070]: I1108 00:29:01.646615 3070 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:29:01.646647 kubelet[3070]: I1108 00:29:01.646631 3070 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:29:01.646647 kubelet[3070]: I1108 00:29:01.646638 3070 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:29:01.646767 kubelet[3070]: I1108 00:29:01.646761 3070 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:29:01.647515 kubelet[3070]: I1108 00:29:01.647478 3070 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:29:01.648730 kubelet[3070]: I1108 00:29:01.648690 3070 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:29:01.650047 kubelet[3070]: E1108 00:29:01.650030 3070 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:29:01.650099 kubelet[3070]: I1108 00:29:01.650062 3070 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:29:01.658094 kubelet[3070]: I1108 00:29:01.658084 3070 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:29:01.658221 kubelet[3070]: I1108 00:29:01.658207 3070 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:29:01.658322 kubelet[3070]: I1108 00:29:01.658222 3070 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-8b27c00582","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:29:01.658390 kubelet[3070]: I1108 00:29:01.658328 3070 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 
00:29:01.658390 kubelet[3070]: I1108 00:29:01.658336 3070 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:29:01.658390 kubelet[3070]: I1108 00:29:01.658354 3070 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:29:01.658769 kubelet[3070]: I1108 00:29:01.658762 3070 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:01.658888 kubelet[3070]: I1108 00:29:01.658883 3070 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:29:01.658908 kubelet[3070]: I1108 00:29:01.658892 3070 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:29:01.658908 kubelet[3070]: I1108 00:29:01.658906 3070 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:29:01.658948 kubelet[3070]: I1108 00:29:01.658915 3070 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:29:01.659460 kubelet[3070]: I1108 00:29:01.659444 3070 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:29:01.659807 kubelet[3070]: I1108 00:29:01.659773 3070 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:29:01.659807 kubelet[3070]: I1108 00:29:01.659795 3070 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:29:01.661115 kubelet[3070]: I1108 00:29:01.661106 3070 server.go:1262] "Started kubelet" Nov 8 00:29:01.661222 kubelet[3070]: I1108 00:29:01.661176 3070 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:29:01.661274 kubelet[3070]: I1108 00:29:01.661195 3070 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:29:01.661312 kubelet[3070]: I1108 00:29:01.661265 3070 server_v1.go:49] 
"podresources" method="list" useActivePods=true Nov 8 00:29:01.661466 kubelet[3070]: I1108 00:29:01.661454 3070 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:29:01.661658 kubelet[3070]: I1108 00:29:01.661648 3070 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:29:01.661701 kubelet[3070]: I1108 00:29:01.661667 3070 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:29:01.661738 kubelet[3070]: E1108 00:29:01.661706 3070 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-8b27c00582\" not found" Nov 8 00:29:01.661738 kubelet[3070]: I1108 00:29:01.661721 3070 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:29:01.661813 kubelet[3070]: I1108 00:29:01.661799 3070 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:29:01.661945 kubelet[3070]: I1108 00:29:01.661930 3070 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:29:01.663747 kubelet[3070]: I1108 00:29:01.663729 3070 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:29:01.664619 kubelet[3070]: I1108 00:29:01.664605 3070 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:29:01.664685 kubelet[3070]: I1108 00:29:01.664666 3070 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:29:01.665077 kubelet[3070]: E1108 00:29:01.665039 3070 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:29:01.665258 kubelet[3070]: I1108 00:29:01.665246 3070 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:29:01.670126 kubelet[3070]: I1108 00:29:01.670103 3070 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:29:01.670721 kubelet[3070]: I1108 00:29:01.670710 3070 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 8 00:29:01.670758 kubelet[3070]: I1108 00:29:01.670726 3070 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:29:01.670758 kubelet[3070]: I1108 00:29:01.670745 3070 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:29:01.670821 kubelet[3070]: E1108 00:29:01.670780 3070 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:29:01.681878 kubelet[3070]: I1108 00:29:01.681826 3070 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:29:01.681878 kubelet[3070]: I1108 00:29:01.681839 3070 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:29:01.681878 kubelet[3070]: I1108 00:29:01.681852 3070 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:29:01.681997 kubelet[3070]: I1108 00:29:01.681940 3070 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:29:01.681997 kubelet[3070]: I1108 00:29:01.681948 3070 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:29:01.681997 kubelet[3070]: I1108 00:29:01.681960 3070 policy_none.go:49] "None policy: Start" Nov 8 00:29:01.681997 kubelet[3070]: I1108 00:29:01.681966 3070 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:29:01.681997 kubelet[3070]: I1108 00:29:01.681973 3070 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 
00:29:01.682093 kubelet[3070]: I1108 00:29:01.682036 3070 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:29:01.682093 kubelet[3070]: I1108 00:29:01.682042 3070 policy_none.go:47] "Start" Nov 8 00:29:01.684377 kubelet[3070]: E1108 00:29:01.684326 3070 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:29:01.684467 kubelet[3070]: I1108 00:29:01.684430 3070 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:29:01.684467 kubelet[3070]: I1108 00:29:01.684438 3070 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:29:01.684584 kubelet[3070]: I1108 00:29:01.684537 3070 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:29:01.684957 kubelet[3070]: E1108 00:29:01.684916 3070 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:29:01.772306 kubelet[3070]: I1108 00:29:01.772189 3070 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.772569 kubelet[3070]: I1108 00:29:01.772322 3070 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.772569 kubelet[3070]: I1108 00:29:01.772475 3070 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.780665 kubelet[3070]: I1108 00:29:01.780612 3070 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:29:01.780844 kubelet[3070]: I1108 00:29:01.780715 3070 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:29:01.780844 kubelet[3070]: E1108 00:29:01.780789 3070 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8b27c00582\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.781596 kubelet[3070]: I1108 00:29:01.781509 3070 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:29:01.781760 kubelet[3070]: E1108 00:29:01.781606 3070 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.789671 kubelet[3070]: I1108 00:29:01.789583 3070 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.797784 kubelet[3070]: I1108 00:29:01.797696 3070 
kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.798012 kubelet[3070]: I1108 00:29:01.797838 3070 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.963442 kubelet[3070]: I1108 00:29:01.963354 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.963768 kubelet[3070]: I1108 00:29:01.963468 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.963768 kubelet[3070]: I1108 00:29:01.963556 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.963768 kubelet[3070]: I1108 00:29:01.963646 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: \"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" 
Nov 8 00:29:01.963768 kubelet[3070]: I1108 00:29:01.963733 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/26fe7a40135bf29e0e7fe5a4be904e69-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-8b27c00582\" (UID: \"26fe7a40135bf29e0e7fe5a4be904e69\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.964430 kubelet[3070]: I1108 00:29:01.963815 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1513553acaf2f466f8fb77f302b16960-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" (UID: \"1513553acaf2f466f8fb77f302b16960\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.964430 kubelet[3070]: I1108 00:29:01.963908 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1513553acaf2f466f8fb77f302b16960-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" (UID: \"1513553acaf2f466f8fb77f302b16960\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.964430 kubelet[3070]: I1108 00:29:01.963995 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1513553acaf2f466f8fb77f302b16960-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" (UID: \"1513553acaf2f466f8fb77f302b16960\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:01.964430 kubelet[3070]: I1108 00:29:01.964079 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4efc25b0429731247e3ac702423305af-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" (UID: 
\"4efc25b0429731247e3ac702423305af\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:02.659999 kubelet[3070]: I1108 00:29:02.659931 3070 apiserver.go:52] "Watching apiserver" Nov 8 00:29:02.677259 kubelet[3070]: I1108 00:29:02.677244 3070 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:02.677526 kubelet[3070]: I1108 00:29:02.677458 3070 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:02.677622 kubelet[3070]: I1108 00:29:02.677612 3070 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:02.680154 kubelet[3070]: I1108 00:29:02.680035 3070 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:29:02.680266 kubelet[3070]: I1108 00:29:02.680246 3070 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:29:02.680301 kubelet[3070]: E1108 00:29:02.680280 3070 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-8b27c00582\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:02.680301 kubelet[3070]: I1108 00:29:02.680249 3070 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:29:02.680351 kubelet[3070]: E1108 00:29:02.680316 3070 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-8b27c00582\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:02.680686 kubelet[3070]: 
E1108 00:29:02.680280 3070 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-8b27c00582\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" Nov 8 00:29:02.690014 kubelet[3070]: I1108 00:29:02.689970 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-8b27c00582" podStartSLOduration=3.6899468029999998 podStartE2EDuration="3.689946803s" podCreationTimestamp="2025-11-08 00:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:02.689946789 +0000 UTC m=+1.066063619" watchObservedRunningTime="2025-11-08 00:29:02.689946803 +0000 UTC m=+1.066063637" Nov 8 00:29:02.697547 kubelet[3070]: I1108 00:29:02.697505 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-8b27c00582" podStartSLOduration=1.697480014 podStartE2EDuration="1.697480014s" podCreationTimestamp="2025-11-08 00:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:02.693951921 +0000 UTC m=+1.070068751" watchObservedRunningTime="2025-11-08 00:29:02.697480014 +0000 UTC m=+1.073596845" Nov 8 00:29:02.717982 kubelet[3070]: I1108 00:29:02.717896 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-8b27c00582" podStartSLOduration=3.717885397 podStartE2EDuration="3.717885397s" podCreationTimestamp="2025-11-08 00:28:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:02.697586351 +0000 UTC m=+1.073703185" watchObservedRunningTime="2025-11-08 00:29:02.717885397 +0000 UTC m=+1.094002225" Nov 8 00:29:02.762581 kubelet[3070]: I1108 00:29:02.762560 
3070 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:29:07.177460 kubelet[3070]: I1108 00:29:07.177350 3070 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:29:07.178314 containerd[1809]: time="2025-11-08T00:29:07.178093167Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:29:07.178882 kubelet[3070]: I1108 00:29:07.178593 3070 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:29:08.313410 systemd[1]: Created slice kubepods-besteffort-pod3fd9272c_f463_4906_99c6_82395ca58f0b.slice - libcontainer container kubepods-besteffort-pod3fd9272c_f463_4906_99c6_82395ca58f0b.slice. Nov 8 00:29:08.408241 kubelet[3070]: I1108 00:29:08.408111 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fd9272c-f463-4906-99c6-82395ca58f0b-xtables-lock\") pod \"kube-proxy-tn9g7\" (UID: \"3fd9272c-f463-4906-99c6-82395ca58f0b\") " pod="kube-system/kube-proxy-tn9g7" Nov 8 00:29:08.409180 kubelet[3070]: I1108 00:29:08.408288 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fd9272c-f463-4906-99c6-82395ca58f0b-lib-modules\") pod \"kube-proxy-tn9g7\" (UID: \"3fd9272c-f463-4906-99c6-82395ca58f0b\") " pod="kube-system/kube-proxy-tn9g7" Nov 8 00:29:08.409180 kubelet[3070]: I1108 00:29:08.408397 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjplj\" (UniqueName: \"kubernetes.io/projected/3fd9272c-f463-4906-99c6-82395ca58f0b-kube-api-access-mjplj\") pod \"kube-proxy-tn9g7\" (UID: \"3fd9272c-f463-4906-99c6-82395ca58f0b\") " pod="kube-system/kube-proxy-tn9g7" Nov 8 00:29:08.409180 kubelet[3070]: 
I1108 00:29:08.408527 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3fd9272c-f463-4906-99c6-82395ca58f0b-kube-proxy\") pod \"kube-proxy-tn9g7\" (UID: \"3fd9272c-f463-4906-99c6-82395ca58f0b\") " pod="kube-system/kube-proxy-tn9g7" Nov 8 00:29:08.421776 systemd[1]: Created slice kubepods-besteffort-pod1ea4d80c_ff76_49f5_b4a5_88749e394b22.slice - libcontainer container kubepods-besteffort-pod1ea4d80c_ff76_49f5_b4a5_88749e394b22.slice. Nov 8 00:29:08.509303 kubelet[3070]: I1108 00:29:08.509185 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq78n\" (UniqueName: \"kubernetes.io/projected/1ea4d80c-ff76-49f5-b4a5-88749e394b22-kube-api-access-jq78n\") pod \"tigera-operator-65cdcdfd6d-mm68n\" (UID: \"1ea4d80c-ff76-49f5-b4a5-88749e394b22\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mm68n" Nov 8 00:29:08.509538 kubelet[3070]: I1108 00:29:08.509382 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ea4d80c-ff76-49f5-b4a5-88749e394b22-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-mm68n\" (UID: \"1ea4d80c-ff76-49f5-b4a5-88749e394b22\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mm68n" Nov 8 00:29:08.627561 containerd[1809]: time="2025-11-08T00:29:08.627462225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tn9g7,Uid:3fd9272c-f463-4906-99c6-82395ca58f0b,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:08.637310 containerd[1809]: time="2025-11-08T00:29:08.637208236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:08.637310 containerd[1809]: time="2025-11-08T00:29:08.637268417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:08.637380 containerd[1809]: time="2025-11-08T00:29:08.637280991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:08.637686 containerd[1809]: time="2025-11-08T00:29:08.637622559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:08.663664 systemd[1]: Started cri-containerd-3b360e4618715bb2b1026350f227fbd972cbe08dd76661b8b5506288879f3186.scope - libcontainer container 3b360e4618715bb2b1026350f227fbd972cbe08dd76661b8b5506288879f3186. Nov 8 00:29:08.711080 containerd[1809]: time="2025-11-08T00:29:08.710976951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tn9g7,Uid:3fd9272c-f463-4906-99c6-82395ca58f0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b360e4618715bb2b1026350f227fbd972cbe08dd76661b8b5506288879f3186\"" Nov 8 00:29:08.722575 containerd[1809]: time="2025-11-08T00:29:08.722520133Z" level=info msg="CreateContainer within sandbox \"3b360e4618715bb2b1026350f227fbd972cbe08dd76661b8b5506288879f3186\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:29:08.724739 containerd[1809]: time="2025-11-08T00:29:08.724699675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mm68n,Uid:1ea4d80c-ff76-49f5-b4a5-88749e394b22,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:29:08.729977 containerd[1809]: time="2025-11-08T00:29:08.729926396Z" level=info msg="CreateContainer within sandbox \"3b360e4618715bb2b1026350f227fbd972cbe08dd76661b8b5506288879f3186\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a3e21ae9ce103f813f32bce78e5decead0118b3d8783698715052f625c10b970\"" Nov 8 00:29:08.730205 containerd[1809]: time="2025-11-08T00:29:08.730186602Z" level=info msg="StartContainer for 
\"a3e21ae9ce103f813f32bce78e5decead0118b3d8783698715052f625c10b970\"" Nov 8 00:29:08.736074 containerd[1809]: time="2025-11-08T00:29:08.735999733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:08.736302 containerd[1809]: time="2025-11-08T00:29:08.736040800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:08.736333 containerd[1809]: time="2025-11-08T00:29:08.736277674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:08.736363 containerd[1809]: time="2025-11-08T00:29:08.736346756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:08.761310 systemd[1]: Started cri-containerd-a3e21ae9ce103f813f32bce78e5decead0118b3d8783698715052f625c10b970.scope - libcontainer container a3e21ae9ce103f813f32bce78e5decead0118b3d8783698715052f625c10b970. Nov 8 00:29:08.763151 systemd[1]: Started cri-containerd-dd1438df71227e3ef00986ef0d8b52bbe2a3fb34b0924db68fbd080f98bba943.scope - libcontainer container dd1438df71227e3ef00986ef0d8b52bbe2a3fb34b0924db68fbd080f98bba943. 
Nov 8 00:29:08.777876 containerd[1809]: time="2025-11-08T00:29:08.777825936Z" level=info msg="StartContainer for \"a3e21ae9ce103f813f32bce78e5decead0118b3d8783698715052f625c10b970\" returns successfully" Nov 8 00:29:08.794903 containerd[1809]: time="2025-11-08T00:29:08.794872776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mm68n,Uid:1ea4d80c-ff76-49f5-b4a5-88749e394b22,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd1438df71227e3ef00986ef0d8b52bbe2a3fb34b0924db68fbd080f98bba943\"" Nov 8 00:29:08.796022 containerd[1809]: time="2025-11-08T00:29:08.795998566Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:29:09.715750 kubelet[3070]: I1108 00:29:09.715604 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tn9g7" podStartSLOduration=1.71556592 podStartE2EDuration="1.71556592s" podCreationTimestamp="2025-11-08 00:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:09.715483199 +0000 UTC m=+8.091600104" watchObservedRunningTime="2025-11-08 00:29:09.71556592 +0000 UTC m=+8.091682805" Nov 8 00:29:10.180219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270372749.mount: Deactivated successfully. 
Nov 8 00:29:11.058359 containerd[1809]: time="2025-11-08T00:29:11.058308038Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:11.058561 containerd[1809]: time="2025-11-08T00:29:11.058442983Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:29:11.058815 containerd[1809]: time="2025-11-08T00:29:11.058802070Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:11.059982 containerd[1809]: time="2025-11-08T00:29:11.059944121Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:11.060444 containerd[1809]: time="2025-11-08T00:29:11.060405919Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.26437938s" Nov 8 00:29:11.060444 containerd[1809]: time="2025-11-08T00:29:11.060422372Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:29:11.062530 containerd[1809]: time="2025-11-08T00:29:11.062515125Z" level=info msg="CreateContainer within sandbox \"dd1438df71227e3ef00986ef0d8b52bbe2a3fb34b0924db68fbd080f98bba943\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:29:11.066538 containerd[1809]: time="2025-11-08T00:29:11.066491492Z" level=info msg="CreateContainer within sandbox 
\"dd1438df71227e3ef00986ef0d8b52bbe2a3fb34b0924db68fbd080f98bba943\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"25a704a840d09fee587d23e9c72044ebdc51ddb7f1cca53e8f594c913315f02f\"" Nov 8 00:29:11.066762 containerd[1809]: time="2025-11-08T00:29:11.066749628Z" level=info msg="StartContainer for \"25a704a840d09fee587d23e9c72044ebdc51ddb7f1cca53e8f594c913315f02f\"" Nov 8 00:29:11.093417 systemd[1]: Started cri-containerd-25a704a840d09fee587d23e9c72044ebdc51ddb7f1cca53e8f594c913315f02f.scope - libcontainer container 25a704a840d09fee587d23e9c72044ebdc51ddb7f1cca53e8f594c913315f02f. Nov 8 00:29:11.107201 containerd[1809]: time="2025-11-08T00:29:11.107173097Z" level=info msg="StartContainer for \"25a704a840d09fee587d23e9c72044ebdc51ddb7f1cca53e8f594c913315f02f\" returns successfully" Nov 8 00:29:11.723950 kubelet[3070]: I1108 00:29:11.723822 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-mm68n" podStartSLOduration=1.458110298 podStartE2EDuration="3.723785284s" podCreationTimestamp="2025-11-08 00:29:08 +0000 UTC" firstStartedPulling="2025-11-08 00:29:08.79572816 +0000 UTC m=+7.171844993" lastFinishedPulling="2025-11-08 00:29:11.061403145 +0000 UTC m=+9.437519979" observedRunningTime="2025-11-08 00:29:11.723703026 +0000 UTC m=+10.099819926" watchObservedRunningTime="2025-11-08 00:29:11.723785284 +0000 UTC m=+10.099902166" Nov 8 00:29:15.343572 update_engine[1804]: I20251108 00:29:15.343407 1804 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:29:15.383149 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (3567) Nov 8 00:29:15.410150 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (3569) Nov 8 00:29:15.659037 sudo[2088]: pam_unix(sudo:session): session closed for user root Nov 8 00:29:15.659961 sshd[2085]: pam_unix(sshd:session): session closed for user core Nov 8 00:29:15.661340 systemd[1]: sshd@8-139.178.94.39:22-139.178.68.195:49616.service: Deactivated successfully. Nov 8 00:29:15.662247 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:29:15.662337 systemd[1]: session-11.scope: Consumed 3.860s CPU time, 170.1M memory peak, 0B memory swap peak. Nov 8 00:29:15.662887 systemd-logind[1799]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:29:15.663494 systemd-logind[1799]: Removed session 11. Nov 8 00:29:19.765744 systemd[1]: Created slice kubepods-besteffort-pod136ea445_1242_4128_bb23_e2c6f4baa017.slice - libcontainer container kubepods-besteffort-pod136ea445_1242_4128_bb23_e2c6f4baa017.slice. 
Nov 8 00:29:19.789676 kubelet[3070]: I1108 00:29:19.789589 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/136ea445-1242-4128-bb23-e2c6f4baa017-tigera-ca-bundle\") pod \"calico-typha-648bdc8fff-wct8r\" (UID: \"136ea445-1242-4128-bb23-e2c6f4baa017\") " pod="calico-system/calico-typha-648bdc8fff-wct8r" Nov 8 00:29:19.790692 kubelet[3070]: I1108 00:29:19.789713 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/136ea445-1242-4128-bb23-e2c6f4baa017-typha-certs\") pod \"calico-typha-648bdc8fff-wct8r\" (UID: \"136ea445-1242-4128-bb23-e2c6f4baa017\") " pod="calico-system/calico-typha-648bdc8fff-wct8r" Nov 8 00:29:19.790692 kubelet[3070]: I1108 00:29:19.789772 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86rk7\" (UniqueName: \"kubernetes.io/projected/136ea445-1242-4128-bb23-e2c6f4baa017-kube-api-access-86rk7\") pod \"calico-typha-648bdc8fff-wct8r\" (UID: \"136ea445-1242-4128-bb23-e2c6f4baa017\") " pod="calico-system/calico-typha-648bdc8fff-wct8r" Nov 8 00:29:19.948956 systemd[1]: Created slice kubepods-besteffort-podfd1fd741_8fe2_4e94_ad2f_aa92f3ad651e.slice - libcontainer container kubepods-besteffort-podfd1fd741_8fe2_4e94_ad2f_aa92f3ad651e.slice. 
Nov 8 00:29:19.990855 kubelet[3070]: I1108 00:29:19.990752 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-flexvol-driver-host\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991221 kubelet[3070]: I1108 00:29:19.990933 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-cni-log-dir\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991221 kubelet[3070]: I1108 00:29:19.991025 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-var-run-calico\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991221 kubelet[3070]: I1108 00:29:19.991090 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-cni-bin-dir\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991221 kubelet[3070]: I1108 00:29:19.991171 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-xtables-lock\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991754 kubelet[3070]: I1108 00:29:19.991263 3070 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-policysync\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991754 kubelet[3070]: I1108 00:29:19.991315 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-tigera-ca-bundle\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991754 kubelet[3070]: I1108 00:29:19.991424 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-node-certs\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991754 kubelet[3070]: I1108 00:29:19.991523 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-cni-net-dir\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.991754 kubelet[3070]: I1108 00:29:19.991578 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-lib-modules\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.992446 kubelet[3070]: I1108 00:29:19.991629 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gn6kd\" (UniqueName: \"kubernetes.io/projected/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-kube-api-access-gn6kd\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:19.992446 kubelet[3070]: I1108 00:29:19.991681 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e-var-lib-calico\") pod \"calico-node-zrzlx\" (UID: \"fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e\") " pod="calico-system/calico-node-zrzlx" Nov 8 00:29:20.070016 containerd[1809]: time="2025-11-08T00:29:20.069932817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-648bdc8fff-wct8r,Uid:136ea445-1242-4128-bb23-e2c6f4baa017,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:20.079751 containerd[1809]: time="2025-11-08T00:29:20.079514540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:20.079751 containerd[1809]: time="2025-11-08T00:29:20.079739087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:20.079751 containerd[1809]: time="2025-11-08T00:29:20.079749486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:20.079889 containerd[1809]: time="2025-11-08T00:29:20.079796221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:20.093350 kubelet[3070]: E1108 00:29:20.093304 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.093350 kubelet[3070]: W1108 00:29:20.093318 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.093350 kubelet[3070]: E1108 00:29:20.093331 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.094114 kubelet[3070]: E1108 00:29:20.094102 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.094114 kubelet[3070]: W1108 00:29:20.094110 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.094184 kubelet[3070]: E1108 00:29:20.094118 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.097311 systemd[1]: Started cri-containerd-72870ede1a3aedb76ba0c8d5aec3d6bf76491acac41a2a7f2a43c90cd7ff40a8.scope - libcontainer container 72870ede1a3aedb76ba0c8d5aec3d6bf76491acac41a2a7f2a43c90cd7ff40a8. 
Nov 8 00:29:20.098486 kubelet[3070]: E1108 00:29:20.098475 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.098486 kubelet[3070]: W1108 00:29:20.098483 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.098582 kubelet[3070]: E1108 00:29:20.098492 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.120595 kubelet[3070]: E1108 00:29:20.120572 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:20.124226 containerd[1809]: time="2025-11-08T00:29:20.124200051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-648bdc8fff-wct8r,Uid:136ea445-1242-4128-bb23-e2c6f4baa017,Namespace:calico-system,Attempt:0,} returns sandbox id \"72870ede1a3aedb76ba0c8d5aec3d6bf76491acac41a2a7f2a43c90cd7ff40a8\"" Nov 8 00:29:20.125108 containerd[1809]: time="2025-11-08T00:29:20.125090257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:29:20.183044 kubelet[3070]: E1108 00:29:20.182957 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.183044 kubelet[3070]: W1108 00:29:20.182998 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 
00:29:20.183044 kubelet[3070]: E1108 00:29:20.183036 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.183631 kubelet[3070]: E1108 00:29:20.183555 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.183631 kubelet[3070]: W1108 00:29:20.183589 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.183631 kubelet[3070]: E1108 00:29:20.183620 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.184153 kubelet[3070]: E1108 00:29:20.184104 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.184153 kubelet[3070]: W1108 00:29:20.184130 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.184397 kubelet[3070]: E1108 00:29:20.184184 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.184748 kubelet[3070]: E1108 00:29:20.184674 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.184748 kubelet[3070]: W1108 00:29:20.184701 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.184748 kubelet[3070]: E1108 00:29:20.184728 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.185257 kubelet[3070]: E1108 00:29:20.185180 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.185257 kubelet[3070]: W1108 00:29:20.185207 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.185257 kubelet[3070]: E1108 00:29:20.185232 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.185624 kubelet[3070]: E1108 00:29:20.185588 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.185624 kubelet[3070]: W1108 00:29:20.185610 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.185802 kubelet[3070]: E1108 00:29:20.185631 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.186177 kubelet[3070]: E1108 00:29:20.186091 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.186177 kubelet[3070]: W1108 00:29:20.186117 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.186177 kubelet[3070]: E1108 00:29:20.186172 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.186696 kubelet[3070]: E1108 00:29:20.186628 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.186696 kubelet[3070]: W1108 00:29:20.186652 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.186696 kubelet[3070]: E1108 00:29:20.186676 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.187227 kubelet[3070]: E1108 00:29:20.187168 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.187227 kubelet[3070]: W1108 00:29:20.187200 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.187227 kubelet[3070]: E1108 00:29:20.187224 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.187733 kubelet[3070]: E1108 00:29:20.187670 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.187733 kubelet[3070]: W1108 00:29:20.187694 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.187733 kubelet[3070]: E1108 00:29:20.187718 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.188217 kubelet[3070]: E1108 00:29:20.188165 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.188217 kubelet[3070]: W1108 00:29:20.188189 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.188217 kubelet[3070]: E1108 00:29:20.188211 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.188727 kubelet[3070]: E1108 00:29:20.188662 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.188727 kubelet[3070]: W1108 00:29:20.188691 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.188919 kubelet[3070]: E1108 00:29:20.188731 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.189281 kubelet[3070]: E1108 00:29:20.189232 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.189281 kubelet[3070]: W1108 00:29:20.189255 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.189281 kubelet[3070]: E1108 00:29:20.189276 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.189788 kubelet[3070]: E1108 00:29:20.189734 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.189788 kubelet[3070]: W1108 00:29:20.189756 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.189954 kubelet[3070]: E1108 00:29:20.189781 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.190401 kubelet[3070]: E1108 00:29:20.190317 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.190401 kubelet[3070]: W1108 00:29:20.190343 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.190401 kubelet[3070]: E1108 00:29:20.190366 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.190894 kubelet[3070]: E1108 00:29:20.190809 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.190894 kubelet[3070]: W1108 00:29:20.190833 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.190894 kubelet[3070]: E1108 00:29:20.190859 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.191390 kubelet[3070]: E1108 00:29:20.191328 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.191390 kubelet[3070]: W1108 00:29:20.191350 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.191390 kubelet[3070]: E1108 00:29:20.191371 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.191823 kubelet[3070]: E1108 00:29:20.191779 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.191823 kubelet[3070]: W1108 00:29:20.191801 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.191823 kubelet[3070]: E1108 00:29:20.191822 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.192310 kubelet[3070]: E1108 00:29:20.192238 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.192310 kubelet[3070]: W1108 00:29:20.192260 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.192310 kubelet[3070]: E1108 00:29:20.192283 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.192701 kubelet[3070]: E1108 00:29:20.192676 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.192701 kubelet[3070]: W1108 00:29:20.192698 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.192888 kubelet[3070]: E1108 00:29:20.192721 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.193396 kubelet[3070]: E1108 00:29:20.193364 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.193396 kubelet[3070]: W1108 00:29:20.193390 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.193627 kubelet[3070]: E1108 00:29:20.193416 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.193627 kubelet[3070]: I1108 00:29:20.193465 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2db14322-3de3-476c-bc43-59b2bd1acea4-socket-dir\") pod \"csi-node-driver-njlbj\" (UID: \"2db14322-3de3-476c-bc43-59b2bd1acea4\") " pod="calico-system/csi-node-driver-njlbj" Nov 8 00:29:20.194009 kubelet[3070]: E1108 00:29:20.193960 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.194009 kubelet[3070]: W1108 00:29:20.193988 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.194009 kubelet[3070]: E1108 00:29:20.194012 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.194434 kubelet[3070]: I1108 00:29:20.194060 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2db14322-3de3-476c-bc43-59b2bd1acea4-kubelet-dir\") pod \"csi-node-driver-njlbj\" (UID: \"2db14322-3de3-476c-bc43-59b2bd1acea4\") " pod="calico-system/csi-node-driver-njlbj" Nov 8 00:29:20.194733 kubelet[3070]: E1108 00:29:20.194672 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.194733 kubelet[3070]: W1108 00:29:20.194713 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.194929 kubelet[3070]: E1108 00:29:20.194748 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.195324 kubelet[3070]: E1108 00:29:20.195258 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.195324 kubelet[3070]: W1108 00:29:20.195284 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.195324 kubelet[3070]: E1108 00:29:20.195311 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.195899 kubelet[3070]: E1108 00:29:20.195833 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.195899 kubelet[3070]: W1108 00:29:20.195858 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.195899 kubelet[3070]: E1108 00:29:20.195882 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.196424 kubelet[3070]: E1108 00:29:20.196369 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.196424 kubelet[3070]: W1108 00:29:20.196396 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.196424 kubelet[3070]: E1108 00:29:20.196421 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.196929 kubelet[3070]: E1108 00:29:20.196867 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.196929 kubelet[3070]: W1108 00:29:20.196891 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.196929 kubelet[3070]: E1108 00:29:20.196916 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.197207 kubelet[3070]: I1108 00:29:20.196962 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2db14322-3de3-476c-bc43-59b2bd1acea4-registration-dir\") pod \"csi-node-driver-njlbj\" (UID: \"2db14322-3de3-476c-bc43-59b2bd1acea4\") " pod="calico-system/csi-node-driver-njlbj" Nov 8 00:29:20.197486 kubelet[3070]: E1108 00:29:20.197420 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.197486 kubelet[3070]: W1108 00:29:20.197447 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.197486 kubelet[3070]: E1108 00:29:20.197472 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.197758 kubelet[3070]: I1108 00:29:20.197514 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2db14322-3de3-476c-bc43-59b2bd1acea4-varrun\") pod \"csi-node-driver-njlbj\" (UID: \"2db14322-3de3-476c-bc43-59b2bd1acea4\") " pod="calico-system/csi-node-driver-njlbj" Nov 8 00:29:20.198077 kubelet[3070]: E1108 00:29:20.198005 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.198077 kubelet[3070]: W1108 00:29:20.198043 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.198357 kubelet[3070]: E1108 00:29:20.198078 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.198623 kubelet[3070]: E1108 00:29:20.198552 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.198623 kubelet[3070]: W1108 00:29:20.198579 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.198623 kubelet[3070]: E1108 00:29:20.198605 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.199103 kubelet[3070]: E1108 00:29:20.199050 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.199103 kubelet[3070]: W1108 00:29:20.199076 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.199103 kubelet[3070]: E1108 00:29:20.199101 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.199465 kubelet[3070]: I1108 00:29:20.199221 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdhsm\" (UniqueName: \"kubernetes.io/projected/2db14322-3de3-476c-bc43-59b2bd1acea4-kube-api-access-sdhsm\") pod \"csi-node-driver-njlbj\" (UID: \"2db14322-3de3-476c-bc43-59b2bd1acea4\") " pod="calico-system/csi-node-driver-njlbj" Nov 8 00:29:20.199757 kubelet[3070]: E1108 00:29:20.199695 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.199757 kubelet[3070]: W1108 00:29:20.199732 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.199952 kubelet[3070]: E1108 00:29:20.199763 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.200266 kubelet[3070]: E1108 00:29:20.200219 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.200266 kubelet[3070]: W1108 00:29:20.200244 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.200266 kubelet[3070]: E1108 00:29:20.200269 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.200846 kubelet[3070]: E1108 00:29:20.200787 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.200846 kubelet[3070]: W1108 00:29:20.200822 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.201044 kubelet[3070]: E1108 00:29:20.200853 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.201430 kubelet[3070]: E1108 00:29:20.201377 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.201430 kubelet[3070]: W1108 00:29:20.201402 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.201430 kubelet[3070]: E1108 00:29:20.201427 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.256674 containerd[1809]: time="2025-11-08T00:29:20.256635842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zrzlx,Uid:fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:20.267249 containerd[1809]: time="2025-11-08T00:29:20.267175060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:20.267454 containerd[1809]: time="2025-11-08T00:29:20.267380486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:20.267454 containerd[1809]: time="2025-11-08T00:29:20.267390639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:20.267454 containerd[1809]: time="2025-11-08T00:29:20.267429973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:20.285584 systemd[1]: Started cri-containerd-7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a.scope - libcontainer container 7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a. 
Nov 8 00:29:20.299639 containerd[1809]: time="2025-11-08T00:29:20.299610790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zrzlx,Uid:fd1fd741-8fe2-4e94-ad2f-aa92f3ad651e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a\"" Nov 8 00:29:20.300021 kubelet[3070]: E1108 00:29:20.300008 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300063 kubelet[3070]: W1108 00:29:20.300023 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300063 kubelet[3070]: E1108 00:29:20.300038 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.300190 kubelet[3070]: E1108 00:29:20.300183 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300228 kubelet[3070]: W1108 00:29:20.300190 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300228 kubelet[3070]: E1108 00:29:20.300200 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.300352 kubelet[3070]: E1108 00:29:20.300343 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300385 kubelet[3070]: W1108 00:29:20.300353 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300385 kubelet[3070]: E1108 00:29:20.300362 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.300468 kubelet[3070]: E1108 00:29:20.300461 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300494 kubelet[3070]: W1108 00:29:20.300468 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300494 kubelet[3070]: E1108 00:29:20.300476 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.300596 kubelet[3070]: E1108 00:29:20.300588 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300596 kubelet[3070]: W1108 00:29:20.300594 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300676 kubelet[3070]: E1108 00:29:20.300601 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.300736 kubelet[3070]: E1108 00:29:20.300726 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300781 kubelet[3070]: W1108 00:29:20.300736 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300781 kubelet[3070]: E1108 00:29:20.300747 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.300860 kubelet[3070]: E1108 00:29:20.300853 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300860 kubelet[3070]: W1108 00:29:20.300858 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300932 kubelet[3070]: E1108 00:29:20.300864 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.300969 kubelet[3070]: E1108 00:29:20.300953 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.300969 kubelet[3070]: W1108 00:29:20.300958 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.300969 kubelet[3070]: E1108 00:29:20.300966 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.301080 kubelet[3070]: E1108 00:29:20.301054 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.301080 kubelet[3070]: W1108 00:29:20.301059 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.301080 kubelet[3070]: E1108 00:29:20.301065 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.301240 kubelet[3070]: E1108 00:29:20.301228 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.301240 kubelet[3070]: W1108 00:29:20.301238 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.301321 kubelet[3070]: E1108 00:29:20.301246 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.301365 kubelet[3070]: E1108 00:29:20.301360 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.301403 kubelet[3070]: W1108 00:29:20.301368 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.301403 kubelet[3070]: E1108 00:29:20.301377 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.301512 kubelet[3070]: E1108 00:29:20.301501 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.301512 kubelet[3070]: W1108 00:29:20.301510 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.301593 kubelet[3070]: E1108 00:29:20.301518 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.301634 kubelet[3070]: E1108 00:29:20.301626 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.301634 kubelet[3070]: W1108 00:29:20.301633 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.301684 kubelet[3070]: E1108 00:29:20.301639 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.301758 kubelet[3070]: E1108 00:29:20.301751 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.301781 kubelet[3070]: W1108 00:29:20.301758 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.301781 kubelet[3070]: E1108 00:29:20.301764 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.301892 kubelet[3070]: E1108 00:29:20.301885 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.301919 kubelet[3070]: W1108 00:29:20.301892 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.301919 kubelet[3070]: E1108 00:29:20.301899 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.302040 kubelet[3070]: E1108 00:29:20.302034 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.302064 kubelet[3070]: W1108 00:29:20.302041 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.302064 kubelet[3070]: E1108 00:29:20.302047 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.302178 kubelet[3070]: E1108 00:29:20.302170 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.302178 kubelet[3070]: W1108 00:29:20.302177 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.302225 kubelet[3070]: E1108 00:29:20.302183 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.302328 kubelet[3070]: E1108 00:29:20.302320 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.302358 kubelet[3070]: W1108 00:29:20.302328 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.302358 kubelet[3070]: E1108 00:29:20.302337 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.302449 kubelet[3070]: E1108 00:29:20.302442 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.302482 kubelet[3070]: W1108 00:29:20.302449 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.302482 kubelet[3070]: E1108 00:29:20.302455 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.302601 kubelet[3070]: E1108 00:29:20.302595 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.302601 kubelet[3070]: W1108 00:29:20.302601 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.302649 kubelet[3070]: E1108 00:29:20.302607 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.302746 kubelet[3070]: E1108 00:29:20.302740 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.302771 kubelet[3070]: W1108 00:29:20.302746 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.302771 kubelet[3070]: E1108 00:29:20.302753 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.302962 kubelet[3070]: E1108 00:29:20.302954 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.302962 kubelet[3070]: W1108 00:29:20.302962 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.303017 kubelet[3070]: E1108 00:29:20.302969 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.303088 kubelet[3070]: E1108 00:29:20.303078 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.303088 kubelet[3070]: W1108 00:29:20.303085 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.303174 kubelet[3070]: E1108 00:29:20.303092 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.303203 kubelet[3070]: E1108 00:29:20.303192 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.303203 kubelet[3070]: W1108 00:29:20.303198 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.303255 kubelet[3070]: E1108 00:29:20.303204 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:20.303421 kubelet[3070]: E1108 00:29:20.303414 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.303421 kubelet[3070]: W1108 00:29:20.303421 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.303473 kubelet[3070]: E1108 00:29:20.303429 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:20.307977 kubelet[3070]: E1108 00:29:20.307956 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:20.307977 kubelet[3070]: W1108 00:29:20.307966 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:20.307977 kubelet[3070]: E1108 00:29:20.307974 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:21.497752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435960944.mount: Deactivated successfully. 
Nov 8 00:29:21.671364 kubelet[3070]: E1108 00:29:21.671313 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:21.810850 containerd[1809]: time="2025-11-08T00:29:21.810762909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:21.811048 containerd[1809]: time="2025-11-08T00:29:21.810966717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:29:21.811293 containerd[1809]: time="2025-11-08T00:29:21.811247761Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:21.812687 containerd[1809]: time="2025-11-08T00:29:21.812643311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:21.812947 containerd[1809]: time="2025-11-08T00:29:21.812903626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.687794751s" Nov 8 00:29:21.812947 containerd[1809]: time="2025-11-08T00:29:21.812920776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:29:21.813457 containerd[1809]: time="2025-11-08T00:29:21.813417003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:29:21.817004 containerd[1809]: time="2025-11-08T00:29:21.816984700Z" level=info msg="CreateContainer within sandbox \"72870ede1a3aedb76ba0c8d5aec3d6bf76491acac41a2a7f2a43c90cd7ff40a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:29:21.820964 containerd[1809]: time="2025-11-08T00:29:21.820946884Z" level=info msg="CreateContainer within sandbox \"72870ede1a3aedb76ba0c8d5aec3d6bf76491acac41a2a7f2a43c90cd7ff40a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3abef2e9b0688b17f902a354dc8ba7fe6027bc7ad8899e158a76aab9f9370700\"" Nov 8 00:29:21.821286 containerd[1809]: time="2025-11-08T00:29:21.821268567Z" level=info msg="StartContainer for \"3abef2e9b0688b17f902a354dc8ba7fe6027bc7ad8899e158a76aab9f9370700\"" Nov 8 00:29:21.848379 systemd[1]: Started cri-containerd-3abef2e9b0688b17f902a354dc8ba7fe6027bc7ad8899e158a76aab9f9370700.scope - libcontainer container 3abef2e9b0688b17f902a354dc8ba7fe6027bc7ad8899e158a76aab9f9370700. 
Nov 8 00:29:21.878489 containerd[1809]: time="2025-11-08T00:29:21.878435411Z" level=info msg="StartContainer for \"3abef2e9b0688b17f902a354dc8ba7fe6027bc7ad8899e158a76aab9f9370700\" returns successfully" Nov 8 00:29:22.755983 kubelet[3070]: I1108 00:29:22.755841 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-648bdc8fff-wct8r" podStartSLOduration=2.067356313 podStartE2EDuration="3.755798413s" podCreationTimestamp="2025-11-08 00:29:19 +0000 UTC" firstStartedPulling="2025-11-08 00:29:20.124897422 +0000 UTC m=+18.501014253" lastFinishedPulling="2025-11-08 00:29:21.813339518 +0000 UTC m=+20.189456353" observedRunningTime="2025-11-08 00:29:22.755216066 +0000 UTC m=+21.131332966" watchObservedRunningTime="2025-11-08 00:29:22.755798413 +0000 UTC m=+21.131915291" Nov 8 00:29:22.810612 kubelet[3070]: E1108 00:29:22.810517 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:22.810612 kubelet[3070]: W1108 00:29:22.810560 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:22.810612 kubelet[3070]: E1108 00:29:22.810603 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:22.811373 kubelet[3070]: E1108 00:29:22.811285 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:22.811373 kubelet[3070]: W1108 00:29:22.811322 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:22.811373 kubelet[3070]: E1108 00:29:22.811359 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:22.812022 kubelet[3070]: E1108 00:29:22.811936 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:22.812022 kubelet[3070]: W1108 00:29:22.811975 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:22.812022 kubelet[3070]: E1108 00:29:22.812010 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:22.812755 kubelet[3070]: E1108 00:29:22.812670 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:22.812755 kubelet[3070]: W1108 00:29:22.812708 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:22.812755 kubelet[3070]: E1108 00:29:22.812743 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:22.813469 kubelet[3070]: E1108 00:29:22.813380 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:22.813469 kubelet[3070]: W1108 00:29:22.813418 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:22.813469 kubelet[3070]: E1108 00:29:22.813452 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:22.814100 kubelet[3070]: E1108 00:29:22.814003 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:22.814100 kubelet[3070]: W1108 00:29:22.814041 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:22.814100 kubelet[3070]: E1108 00:29:22.814076 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:29:22.814777 kubelet[3070]: E1108 00:29:22.814680 3070 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:29:22.814777 kubelet[3070]: W1108 00:29:22.814723 3070 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:29:22.814777 kubelet[3070]: E1108 00:29:22.814758 3070 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:29:23.186200 containerd[1809]: time="2025-11-08T00:29:23.186084319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:23.186459 containerd[1809]: time="2025-11-08T00:29:23.186308907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:29:23.186673 containerd[1809]: time="2025-11-08T00:29:23.186621137Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:23.188222 containerd[1809]: time="2025-11-08T00:29:23.188182582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:23.188563 containerd[1809]: time="2025-11-08T00:29:23.188518495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.375087544s" Nov 8 00:29:23.188563 containerd[1809]: time="2025-11-08T00:29:23.188535949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:29:23.190122 containerd[1809]: time="2025-11-08T00:29:23.190087170Z" level=info msg="CreateContainer within sandbox \"7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:29:23.195348 containerd[1809]: time="2025-11-08T00:29:23.195301210Z" level=info msg="CreateContainer within sandbox \"7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b\"" Nov 8 00:29:23.195590 containerd[1809]: time="2025-11-08T00:29:23.195549664Z" level=info msg="StartContainer for \"4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b\"" Nov 8 00:29:23.227501 systemd[1]: Started cri-containerd-4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b.scope - libcontainer container 4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b. Nov 8 00:29:23.284587 containerd[1809]: time="2025-11-08T00:29:23.284523877Z" level=info msg="StartContainer for \"4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b\" returns successfully" Nov 8 00:29:23.294477 systemd[1]: cri-containerd-4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b.scope: Deactivated successfully. Nov 8 00:29:23.316651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b-rootfs.mount: Deactivated successfully. 
Nov 8 00:29:23.672416 kubelet[3070]: E1108 00:29:23.672300 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:23.740334 kubelet[3070]: I1108 00:29:23.740282 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:29:23.771696 containerd[1809]: time="2025-11-08T00:29:23.771662457Z" level=info msg="shim disconnected" id=4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b namespace=k8s.io Nov 8 00:29:23.771696 containerd[1809]: time="2025-11-08T00:29:23.771693037Z" level=warning msg="cleaning up after shim disconnected" id=4d37fe738914b22023a0b5dc895fbdbc8993a02ef8f27a9c276d9fb1561fe85b namespace=k8s.io Nov 8 00:29:23.771696 containerd[1809]: time="2025-11-08T00:29:23.771698290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:24.749396 containerd[1809]: time="2025-11-08T00:29:24.749322013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:29:25.671559 kubelet[3070]: E1108 00:29:25.671434 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:27.025647 containerd[1809]: time="2025-11-08T00:29:27.025590909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:27.025871 containerd[1809]: time="2025-11-08T00:29:27.025826061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" 
Nov 8 00:29:27.026191 containerd[1809]: time="2025-11-08T00:29:27.026163988Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:27.027158 containerd[1809]: time="2025-11-08T00:29:27.027106358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:27.027590 containerd[1809]: time="2025-11-08T00:29:27.027557998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.278170772s" Nov 8 00:29:27.027590 containerd[1809]: time="2025-11-08T00:29:27.027574803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:29:27.029255 containerd[1809]: time="2025-11-08T00:29:27.029234581Z" level=info msg="CreateContainer within sandbox \"7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:29:27.034909 containerd[1809]: time="2025-11-08T00:29:27.034892042Z" level=info msg="CreateContainer within sandbox \"7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b\"" Nov 8 00:29:27.035178 containerd[1809]: time="2025-11-08T00:29:27.035164483Z" level=info msg="StartContainer for \"e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b\"" Nov 8 
00:29:27.069654 systemd[1]: Started cri-containerd-e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b.scope - libcontainer container e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b. Nov 8 00:29:27.128197 containerd[1809]: time="2025-11-08T00:29:27.128160106Z" level=info msg="StartContainer for \"e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b\" returns successfully" Nov 8 00:29:27.672020 kubelet[3070]: E1108 00:29:27.671988 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:27.762711 containerd[1809]: time="2025-11-08T00:29:27.762685236Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:29:27.763684 systemd[1]: cri-containerd-e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b.scope: Deactivated successfully. Nov 8 00:29:27.773166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b-rootfs.mount: Deactivated successfully. Nov 8 00:29:27.822775 kubelet[3070]: I1108 00:29:27.822713 3070 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:29:27.945615 systemd[1]: Created slice kubepods-burstable-pod24a3af85_008b_4d6d_85c6_e1f4e122242a.slice - libcontainer container kubepods-burstable-pod24a3af85_008b_4d6d_85c6_e1f4e122242a.slice. 
Nov 8 00:29:27.960363 kubelet[3070]: I1108 00:29:27.960249 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24a3af85-008b-4d6d-85c6-e1f4e122242a-config-volume\") pod \"coredns-66bc5c9577-nl8rk\" (UID: \"24a3af85-008b-4d6d-85c6-e1f4e122242a\") " pod="kube-system/coredns-66bc5c9577-nl8rk" Nov 8 00:29:27.960363 kubelet[3070]: I1108 00:29:27.960347 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6vh9\" (UniqueName: \"kubernetes.io/projected/24a3af85-008b-4d6d-85c6-e1f4e122242a-kube-api-access-m6vh9\") pod \"coredns-66bc5c9577-nl8rk\" (UID: \"24a3af85-008b-4d6d-85c6-e1f4e122242a\") " pod="kube-system/coredns-66bc5c9577-nl8rk" Nov 8 00:29:28.034404 systemd[1]: Created slice kubepods-besteffort-pod7c46dfff_678e_44bc_9089_cef43e8fa0d3.slice - libcontainer container kubepods-besteffort-pod7c46dfff_678e_44bc_9089_cef43e8fa0d3.slice. 
Nov 8 00:29:28.061179 kubelet[3070]: I1108 00:29:28.061067 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c46dfff-678e-44bc-9089-cef43e8fa0d3-tigera-ca-bundle\") pod \"calico-kube-controllers-7c8d496dff-jlg6z\" (UID: \"7c46dfff-678e-44bc-9089-cef43e8fa0d3\") " pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" Nov 8 00:29:28.093701 kubelet[3070]: I1108 00:29:28.061212 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krfv2\" (UniqueName: \"kubernetes.io/projected/7c46dfff-678e-44bc-9089-cef43e8fa0d3-kube-api-access-krfv2\") pod \"calico-kube-controllers-7c8d496dff-jlg6z\" (UID: \"7c46dfff-678e-44bc-9089-cef43e8fa0d3\") " pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" Nov 8 00:29:28.129755 systemd[1]: Created slice kubepods-burstable-pod610570e2_7f08_4a1f_b974_d26709be3c92.slice - libcontainer container kubepods-burstable-pod610570e2_7f08_4a1f_b974_d26709be3c92.slice. 
Nov 8 00:29:28.161885 kubelet[3070]: I1108 00:29:28.161847 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/610570e2-7f08-4a1f-b974-d26709be3c92-config-volume\") pod \"coredns-66bc5c9577-7x794\" (UID: \"610570e2-7f08-4a1f-b974-d26709be3c92\") " pod="kube-system/coredns-66bc5c9577-7x794" Nov 8 00:29:28.162022 kubelet[3070]: I1108 00:29:28.161895 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl65j\" (UniqueName: \"kubernetes.io/projected/610570e2-7f08-4a1f-b974-d26709be3c92-kube-api-access-nl65j\") pod \"coredns-66bc5c9577-7x794\" (UID: \"610570e2-7f08-4a1f-b974-d26709be3c92\") " pod="kube-system/coredns-66bc5c9577-7x794" Nov 8 00:29:28.172570 containerd[1809]: time="2025-11-08T00:29:28.172535080Z" level=info msg="shim disconnected" id=e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b namespace=k8s.io Nov 8 00:29:28.172570 containerd[1809]: time="2025-11-08T00:29:28.172569629Z" level=warning msg="cleaning up after shim disconnected" id=e29d767039bb892a4a81de6c374c4aaf846930f071be88976dfc3754f5fc749b namespace=k8s.io Nov 8 00:29:28.172787 containerd[1809]: time="2025-11-08T00:29:28.172575067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:29:28.175144 systemd[1]: Created slice kubepods-besteffort-pod5ec5b66b_733a_489d_9c96_c95ce9255384.slice - libcontainer container kubepods-besteffort-pod5ec5b66b_733a_489d_9c96_c95ce9255384.slice. Nov 8 00:29:28.177416 systemd[1]: Created slice kubepods-besteffort-poda4457e65_0840_44a3_9b91_05cc2050df9f.slice - libcontainer container kubepods-besteffort-poda4457e65_0840_44a3_9b91_05cc2050df9f.slice. Nov 8 00:29:28.180153 systemd[1]: Created slice kubepods-besteffort-podd510fe8b_db97_40db_ab28_3634909f38a6.slice - libcontainer container kubepods-besteffort-podd510fe8b_db97_40db_ab28_3634909f38a6.slice. 
Nov 8 00:29:28.182281 systemd[1]: Created slice kubepods-besteffort-podf2aca3e9_badc_4243_9e39_2f08b59499bf.slice - libcontainer container kubepods-besteffort-podf2aca3e9_badc_4243_9e39_2f08b59499bf.slice. Nov 8 00:29:28.254356 containerd[1809]: time="2025-11-08T00:29:28.254266016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl8rk,Uid:24a3af85-008b-4d6d-85c6-e1f4e122242a,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:28.262647 kubelet[3070]: I1108 00:29:28.262627 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp7z9\" (UniqueName: \"kubernetes.io/projected/a4457e65-0840-44a3-9b91-05cc2050df9f-kube-api-access-kp7z9\") pod \"calico-apiserver-6694c6b5c5-rk6lq\" (UID: \"a4457e65-0840-44a3-9b91-05cc2050df9f\") " pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" Nov 8 00:29:28.262720 kubelet[3070]: I1108 00:29:28.262651 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-ca-bundle\") pod \"whisker-779f6bb48c-26p75\" (UID: \"f2aca3e9-badc-4243-9e39-2f08b59499bf\") " pod="calico-system/whisker-779f6bb48c-26p75" Nov 8 00:29:28.262720 kubelet[3070]: I1108 00:29:28.262663 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4457e65-0840-44a3-9b91-05cc2050df9f-calico-apiserver-certs\") pod \"calico-apiserver-6694c6b5c5-rk6lq\" (UID: \"a4457e65-0840-44a3-9b91-05cc2050df9f\") " pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" Nov 8 00:29:28.262720 kubelet[3070]: I1108 00:29:28.262674 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6kn4\" (UniqueName: 
\"kubernetes.io/projected/f2aca3e9-badc-4243-9e39-2f08b59499bf-kube-api-access-f6kn4\") pod \"whisker-779f6bb48c-26p75\" (UID: \"f2aca3e9-badc-4243-9e39-2f08b59499bf\") " pod="calico-system/whisker-779f6bb48c-26p75" Nov 8 00:29:28.262720 kubelet[3070]: I1108 00:29:28.262684 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d510fe8b-db97-40db-ab28-3634909f38a6-config\") pod \"goldmane-7c778bb748-t42z5\" (UID: \"d510fe8b-db97-40db-ab28-3634909f38a6\") " pod="calico-system/goldmane-7c778bb748-t42z5" Nov 8 00:29:28.262720 kubelet[3070]: I1108 00:29:28.262694 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d510fe8b-db97-40db-ab28-3634909f38a6-goldmane-key-pair\") pod \"goldmane-7c778bb748-t42z5\" (UID: \"d510fe8b-db97-40db-ab28-3634909f38a6\") " pod="calico-system/goldmane-7c778bb748-t42z5" Nov 8 00:29:28.262818 kubelet[3070]: I1108 00:29:28.262710 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94df8\" (UniqueName: \"kubernetes.io/projected/d510fe8b-db97-40db-ab28-3634909f38a6-kube-api-access-94df8\") pod \"goldmane-7c778bb748-t42z5\" (UID: \"d510fe8b-db97-40db-ab28-3634909f38a6\") " pod="calico-system/goldmane-7c778bb748-t42z5" Nov 8 00:29:28.262818 kubelet[3070]: I1108 00:29:28.262757 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8jxk\" (UniqueName: \"kubernetes.io/projected/5ec5b66b-733a-489d-9c96-c95ce9255384-kube-api-access-c8jxk\") pod \"calico-apiserver-6694c6b5c5-xb2cq\" (UID: \"5ec5b66b-733a-489d-9c96-c95ce9255384\") " pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" Nov 8 00:29:28.262818 kubelet[3070]: I1108 00:29:28.262774 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-backend-key-pair\") pod \"whisker-779f6bb48c-26p75\" (UID: \"f2aca3e9-badc-4243-9e39-2f08b59499bf\") " pod="calico-system/whisker-779f6bb48c-26p75" Nov 8 00:29:28.262818 kubelet[3070]: I1108 00:29:28.262783 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d510fe8b-db97-40db-ab28-3634909f38a6-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-t42z5\" (UID: \"d510fe8b-db97-40db-ab28-3634909f38a6\") " pod="calico-system/goldmane-7c778bb748-t42z5" Nov 8 00:29:28.262818 kubelet[3070]: I1108 00:29:28.262805 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ec5b66b-733a-489d-9c96-c95ce9255384-calico-apiserver-certs\") pod \"calico-apiserver-6694c6b5c5-xb2cq\" (UID: \"5ec5b66b-733a-489d-9c96-c95ce9255384\") " pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" Nov 8 00:29:28.283096 containerd[1809]: time="2025-11-08T00:29:28.283068046Z" level=error msg="Failed to destroy network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.283302 containerd[1809]: time="2025-11-08T00:29:28.283261175Z" level=error msg="encountered an error cleaning up failed sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.283302 containerd[1809]: 
time="2025-11-08T00:29:28.283289715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl8rk,Uid:24a3af85-008b-4d6d-85c6-e1f4e122242a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.283461 kubelet[3070]: E1108 00:29:28.283441 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.283509 kubelet[3070]: E1108 00:29:28.283482 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nl8rk" Nov 8 00:29:28.283509 kubelet[3070]: E1108 00:29:28.283495 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nl8rk" Nov 8 00:29:28.283557 kubelet[3070]: E1108 00:29:28.283533 3070 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nl8rk_kube-system(24a3af85-008b-4d6d-85c6-e1f4e122242a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nl8rk_kube-system(24a3af85-008b-4d6d-85c6-e1f4e122242a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nl8rk" podUID="24a3af85-008b-4d6d-85c6-e1f4e122242a" Nov 8 00:29:28.345622 containerd[1809]: time="2025-11-08T00:29:28.345586070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8d496dff-jlg6z,Uid:7c46dfff-678e-44bc-9089-cef43e8fa0d3,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:28.372269 containerd[1809]: time="2025-11-08T00:29:28.372243177Z" level=error msg="Failed to destroy network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.372438 containerd[1809]: time="2025-11-08T00:29:28.372425666Z" level=error msg="encountered an error cleaning up failed sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.372465 containerd[1809]: time="2025-11-08T00:29:28.372455353Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7c8d496dff-jlg6z,Uid:7c46dfff-678e-44bc-9089-cef43e8fa0d3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.372598 kubelet[3070]: E1108 00:29:28.372581 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.372629 kubelet[3070]: E1108 00:29:28.372610 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" Nov 8 00:29:28.372629 kubelet[3070]: E1108 00:29:28.372622 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" Nov 8 00:29:28.372670 kubelet[3070]: E1108 00:29:28.372655 3070 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:29:28.438479 containerd[1809]: time="2025-11-08T00:29:28.438408773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x794,Uid:610570e2-7f08-4a1f-b974-d26709be3c92,Namespace:kube-system,Attempt:0,}" Nov 8 00:29:28.464472 containerd[1809]: time="2025-11-08T00:29:28.464416026Z" level=error msg="Failed to destroy network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.464658 containerd[1809]: time="2025-11-08T00:29:28.464616743Z" level=error msg="encountered an error cleaning up failed sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.464658 containerd[1809]: time="2025-11-08T00:29:28.464643092Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-7x794,Uid:610570e2-7f08-4a1f-b974-d26709be3c92,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.464829 kubelet[3070]: E1108 00:29:28.464782 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.464829 kubelet[3070]: E1108 00:29:28.464813 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7x794" Nov 8 00:29:28.464829 kubelet[3070]: E1108 00:29:28.464825 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7x794" Nov 8 00:29:28.464909 kubelet[3070]: E1108 00:29:28.464856 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-66bc5c9577-7x794_kube-system(610570e2-7f08-4a1f-b974-d26709be3c92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7x794_kube-system(610570e2-7f08-4a1f-b974-d26709be3c92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7x794" podUID="610570e2-7f08-4a1f-b974-d26709be3c92" Nov 8 00:29:28.477458 containerd[1809]: time="2025-11-08T00:29:28.477414983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-xb2cq,Uid:5ec5b66b-733a-489d-9c96-c95ce9255384,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:29:28.480213 containerd[1809]: time="2025-11-08T00:29:28.480199224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-rk6lq,Uid:a4457e65-0840-44a3-9b91-05cc2050df9f,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:29:28.483272 containerd[1809]: time="2025-11-08T00:29:28.483253845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t42z5,Uid:d510fe8b-db97-40db-ab28-3634909f38a6,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:28.483888 containerd[1809]: time="2025-11-08T00:29:28.483875775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779f6bb48c-26p75,Uid:f2aca3e9-badc-4243-9e39-2f08b59499bf,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:28.505873 containerd[1809]: time="2025-11-08T00:29:28.505788212Z" level=error msg="Failed to destroy network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 8 00:29:28.506062 containerd[1809]: time="2025-11-08T00:29:28.506008044Z" level=error msg="encountered an error cleaning up failed sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.506113 containerd[1809]: time="2025-11-08T00:29:28.506076907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-xb2cq,Uid:5ec5b66b-733a-489d-9c96-c95ce9255384,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.506264 kubelet[3070]: E1108 00:29:28.506236 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.506306 kubelet[3070]: E1108 00:29:28.506283 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" Nov 8 
00:29:28.506340 kubelet[3070]: E1108 00:29:28.506301 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" Nov 8 00:29:28.506386 kubelet[3070]: E1108 00:29:28.506346 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:29:28.508756 containerd[1809]: time="2025-11-08T00:29:28.508723073Z" level=error msg="Failed to destroy network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.508947 containerd[1809]: time="2025-11-08T00:29:28.508932930Z" level=error msg="encountered an error cleaning up failed sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.509097 containerd[1809]: time="2025-11-08T00:29:28.508966252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-rk6lq,Uid:a4457e65-0840-44a3-9b91-05cc2050df9f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.509155 kubelet[3070]: E1108 00:29:28.509108 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.509189 kubelet[3070]: E1108 00:29:28.509178 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" Nov 8 00:29:28.509209 kubelet[3070]: E1108 00:29:28.509192 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" Nov 8 00:29:28.509251 kubelet[3070]: E1108 00:29:28.509234 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:29:28.513539 containerd[1809]: time="2025-11-08T00:29:28.513518743Z" level=error msg="Failed to destroy network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.513722 containerd[1809]: time="2025-11-08T00:29:28.513685127Z" level=error msg="encountered an error cleaning up failed sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.513722 containerd[1809]: time="2025-11-08T00:29:28.513716209Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-t42z5,Uid:d510fe8b-db97-40db-ab28-3634909f38a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.513856 kubelet[3070]: E1108 00:29:28.513823 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.513898 kubelet[3070]: E1108 00:29:28.513862 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t42z5" Nov 8 00:29:28.513898 kubelet[3070]: E1108 00:29:28.513874 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t42z5" Nov 8 00:29:28.513968 kubelet[3070]: E1108 00:29:28.513911 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:29:28.514027 containerd[1809]: time="2025-11-08T00:29:28.513893564Z" level=error msg="Failed to destroy network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.514092 containerd[1809]: time="2025-11-08T00:29:28.514078207Z" level=error msg="encountered an error cleaning up failed sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.514127 containerd[1809]: time="2025-11-08T00:29:28.514105059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-779f6bb48c-26p75,Uid:f2aca3e9-badc-4243-9e39-2f08b59499bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.514252 kubelet[3070]: E1108 00:29:28.514225 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.514252 kubelet[3070]: E1108 00:29:28.514243 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-779f6bb48c-26p75" Nov 8 00:29:28.514296 kubelet[3070]: E1108 00:29:28.514253 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-779f6bb48c-26p75" Nov 8 00:29:28.514296 kubelet[3070]: E1108 00:29:28.514273 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-779f6bb48c-26p75_calico-system(f2aca3e9-badc-4243-9e39-2f08b59499bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-779f6bb48c-26p75_calico-system(f2aca3e9-badc-4243-9e39-2f08b59499bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-779f6bb48c-26p75" podUID="f2aca3e9-badc-4243-9e39-2f08b59499bf" Nov 8 00:29:28.757631 kubelet[3070]: I1108 00:29:28.757375 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:29:28.758851 containerd[1809]: time="2025-11-08T00:29:28.758769640Z" level=info msg="StopPodSandbox for \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\"" Nov 8 00:29:28.759405 containerd[1809]: time="2025-11-08T00:29:28.759310023Z" level=info msg="Ensure that sandbox 956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04 in task-service has been cleanup successfully" Nov 8 00:29:28.760304 kubelet[3070]: I1108 00:29:28.760240 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:29:28.761295 containerd[1809]: time="2025-11-08T00:29:28.761229732Z" level=info msg="StopPodSandbox for \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\"" Nov 8 00:29:28.761702 containerd[1809]: time="2025-11-08T00:29:28.761639249Z" level=info msg="Ensure that sandbox ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0 in task-service has been cleanup successfully" Nov 8 00:29:28.762944 kubelet[3070]: I1108 00:29:28.762890 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:29:28.764362 containerd[1809]: time="2025-11-08T00:29:28.764274620Z" level=info msg="StopPodSandbox for \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\"" Nov 8 
00:29:28.764765 containerd[1809]: time="2025-11-08T00:29:28.764708722Z" level=info msg="Ensure that sandbox 452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b in task-service has been cleanup successfully" Nov 8 00:29:28.765495 kubelet[3070]: I1108 00:29:28.765439 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:29:28.766865 containerd[1809]: time="2025-11-08T00:29:28.766793456Z" level=info msg="StopPodSandbox for \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\"" Nov 8 00:29:28.767460 containerd[1809]: time="2025-11-08T00:29:28.767384975Z" level=info msg="Ensure that sandbox cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f in task-service has been cleanup successfully" Nov 8 00:29:28.768601 kubelet[3070]: I1108 00:29:28.768539 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Nov 8 00:29:28.770150 containerd[1809]: time="2025-11-08T00:29:28.770061953Z" level=info msg="StopPodSandbox for \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\"" Nov 8 00:29:28.770574 containerd[1809]: time="2025-11-08T00:29:28.770509480Z" level=info msg="Ensure that sandbox 225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9 in task-service has been cleanup successfully" Nov 8 00:29:28.777026 containerd[1809]: time="2025-11-08T00:29:28.776948910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:29:28.777406 kubelet[3070]: I1108 00:29:28.776983 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:29:28.777999 containerd[1809]: time="2025-11-08T00:29:28.777982983Z" level=info msg="StopPodSandbox for 
\"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\"" Nov 8 00:29:28.778130 containerd[1809]: time="2025-11-08T00:29:28.778120314Z" level=info msg="Ensure that sandbox 2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f in task-service has been cleanup successfully" Nov 8 00:29:28.778276 kubelet[3070]: I1108 00:29:28.778263 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:29:28.778605 containerd[1809]: time="2025-11-08T00:29:28.778587266Z" level=info msg="StopPodSandbox for \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\"" Nov 8 00:29:28.779034 containerd[1809]: time="2025-11-08T00:29:28.778936513Z" level=info msg="Ensure that sandbox 5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8 in task-service has been cleanup successfully" Nov 8 00:29:28.790611 containerd[1809]: time="2025-11-08T00:29:28.790576821Z" level=error msg="StopPodSandbox for \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\" failed" error="failed to destroy network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.791109 kubelet[3070]: E1108 00:29:28.790758 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:29:28.791109 kubelet[3070]: E1108 00:29:28.790797 3070 
kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04"} Nov 8 00:29:28.791109 kubelet[3070]: E1108 00:29:28.790835 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d510fe8b-db97-40db-ab28-3634909f38a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:28.791109 kubelet[3070]: E1108 00:29:28.790853 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d510fe8b-db97-40db-ab28-3634909f38a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:29:28.791619 containerd[1809]: time="2025-11-08T00:29:28.791601168Z" level=error msg="StopPodSandbox for \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\" failed" error="failed to destroy network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.791730 kubelet[3070]: E1108 00:29:28.791714 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:29:28.791777 kubelet[3070]: E1108 00:29:28.791732 3070 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f"} Nov 8 00:29:28.791777 kubelet[3070]: E1108 00:29:28.791745 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c46dfff-678e-44bc-9089-cef43e8fa0d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:28.791777 kubelet[3070]: E1108 00:29:28.791767 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c46dfff-678e-44bc-9089-cef43e8fa0d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:29:28.791906 containerd[1809]: time="2025-11-08T00:29:28.791864820Z" level=error msg="StopPodSandbox for 
\"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\" failed" error="failed to destroy network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.791943 kubelet[3070]: E1108 00:29:28.791934 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:29:28.791978 kubelet[3070]: E1108 00:29:28.791945 3070 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0"} Nov 8 00:29:28.791978 kubelet[3070]: E1108 00:29:28.791956 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ec5b66b-733a-489d-9c96-c95ce9255384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:28.791978 kubelet[3070]: E1108 00:29:28.791966 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ec5b66b-733a-489d-9c96-c95ce9255384\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:29:28.792260 containerd[1809]: time="2025-11-08T00:29:28.792239244Z" level=error msg="StopPodSandbox for \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\" failed" error="failed to destroy network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.792349 kubelet[3070]: E1108 00:29:28.792335 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:29:28.792373 kubelet[3070]: E1108 00:29:28.792350 3070 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b"} Nov 8 00:29:28.792373 kubelet[3070]: E1108 00:29:28.792361 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"610570e2-7f08-4a1f-b974-d26709be3c92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:28.792417 kubelet[3070]: E1108 00:29:28.792371 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"610570e2-7f08-4a1f-b974-d26709be3c92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7x794" podUID="610570e2-7f08-4a1f-b974-d26709be3c92" Nov 8 00:29:28.792606 containerd[1809]: time="2025-11-08T00:29:28.792586212Z" level=error msg="StopPodSandbox for \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\" failed" error="failed to destroy network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.792686 kubelet[3070]: E1108 00:29:28.792675 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Nov 8 00:29:28.792708 kubelet[3070]: E1108 00:29:28.792688 3070 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"} Nov 8 00:29:28.792708 kubelet[3070]: E1108 00:29:28.792702 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24a3af85-008b-4d6d-85c6-e1f4e122242a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:28.792754 kubelet[3070]: E1108 00:29:28.792713 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24a3af85-008b-4d6d-85c6-e1f4e122242a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nl8rk" podUID="24a3af85-008b-4d6d-85c6-e1f4e122242a" Nov 8 00:29:28.793564 containerd[1809]: time="2025-11-08T00:29:28.793549569Z" level=error msg="StopPodSandbox for \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\" failed" error="failed to destroy network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.793747 kubelet[3070]: E1108 00:29:28.793735 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:29:28.793771 kubelet[3070]: E1108 00:29:28.793750 3070 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f"} Nov 8 00:29:28.793771 kubelet[3070]: E1108 00:29:28.793762 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2aca3e9-badc-4243-9e39-2f08b59499bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:28.793813 kubelet[3070]: E1108 00:29:28.793773 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2aca3e9-badc-4243-9e39-2f08b59499bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-779f6bb48c-26p75" podUID="f2aca3e9-badc-4243-9e39-2f08b59499bf" Nov 8 00:29:28.795887 containerd[1809]: time="2025-11-08T00:29:28.795843182Z" level=error msg="StopPodSandbox for \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\" failed" error="failed to destroy network for sandbox 
\"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:28.795923 kubelet[3070]: E1108 00:29:28.795903 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:29:28.795923 kubelet[3070]: E1108 00:29:28.795918 3070 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8"} Nov 8 00:29:28.795964 kubelet[3070]: E1108 00:29:28.795937 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4457e65-0840-44a3-9b91-05cc2050df9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:28.795964 kubelet[3070]: E1108 00:29:28.795950 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4457e65-0840-44a3-9b91-05cc2050df9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:29:29.110017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9-shm.mount: Deactivated successfully. Nov 8 00:29:29.686726 systemd[1]: Created slice kubepods-besteffort-pod2db14322_3de3_476c_bc43_59b2bd1acea4.slice - libcontainer container kubepods-besteffort-pod2db14322_3de3_476c_bc43_59b2bd1acea4.slice. Nov 8 00:29:29.693243 containerd[1809]: time="2025-11-08T00:29:29.693180577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njlbj,Uid:2db14322-3de3-476c-bc43-59b2bd1acea4,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:29.721533 containerd[1809]: time="2025-11-08T00:29:29.721506467Z" level=error msg="Failed to destroy network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:29.721716 containerd[1809]: time="2025-11-08T00:29:29.721700752Z" level=error msg="encountered an error cleaning up failed sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:29.721755 containerd[1809]: time="2025-11-08T00:29:29.721735353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njlbj,Uid:2db14322-3de3-476c-bc43-59b2bd1acea4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:29.721948 kubelet[3070]: E1108 00:29:29.721902 3070 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:29.721948 kubelet[3070]: E1108 00:29:29.721935 3070 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njlbj" Nov 8 00:29:29.722016 kubelet[3070]: E1108 00:29:29.721947 3070 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-njlbj" Nov 8 00:29:29.722016 kubelet[3070]: E1108 00:29:29.721993 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:29.723111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18-shm.mount: Deactivated successfully. Nov 8 00:29:29.783877 kubelet[3070]: I1108 00:29:29.783768 3070 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Nov 8 00:29:29.785077 containerd[1809]: time="2025-11-08T00:29:29.784963420Z" level=info msg="StopPodSandbox for \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\"" Nov 8 00:29:29.785499 containerd[1809]: time="2025-11-08T00:29:29.785409408Z" level=info msg="Ensure that sandbox e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18 in task-service has been cleanup successfully" Nov 8 00:29:29.802711 containerd[1809]: time="2025-11-08T00:29:29.802652818Z" level=error msg="StopPodSandbox for \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\" failed" error="failed to destroy network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:29:29.802919 kubelet[3070]: E1108 00:29:29.802856 3070 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Nov 8 00:29:29.802919 kubelet[3070]: E1108 00:29:29.802910 3070 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"} Nov 8 00:29:29.802993 kubelet[3070]: E1108 00:29:29.802941 3070 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2db14322-3de3-476c-bc43-59b2bd1acea4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:29:29.802993 kubelet[3070]: E1108 00:29:29.802971 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2db14322-3de3-476c-bc43-59b2bd1acea4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:32.050650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3917636753.mount: Deactivated successfully. 
Nov 8 00:29:32.067665 containerd[1809]: time="2025-11-08T00:29:32.067620768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:32.067886 containerd[1809]: time="2025-11-08T00:29:32.067847392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:29:32.068182 containerd[1809]: time="2025-11-08T00:29:32.068163246Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:32.069073 containerd[1809]: time="2025-11-08T00:29:32.069032525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:29:32.069416 containerd[1809]: time="2025-11-08T00:29:32.069380668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.292360974s" Nov 8 00:29:32.069416 containerd[1809]: time="2025-11-08T00:29:32.069395402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:29:32.090204 containerd[1809]: time="2025-11-08T00:29:32.090123126Z" level=info msg="CreateContainer within sandbox \"7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:29:32.097580 containerd[1809]: time="2025-11-08T00:29:32.097530489Z" level=info msg="CreateContainer 
within sandbox \"7bfc9e6db9a36ba455dbfda8340d67efa1595e232f1ae456d4863add39b6383a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3e55a7b259e244495cfce277d39fe43758447e80b224a9153c982284888a000c\"" Nov 8 00:29:32.097811 containerd[1809]: time="2025-11-08T00:29:32.097797004Z" level=info msg="StartContainer for \"3e55a7b259e244495cfce277d39fe43758447e80b224a9153c982284888a000c\"" Nov 8 00:29:32.122629 systemd[1]: Started cri-containerd-3e55a7b259e244495cfce277d39fe43758447e80b224a9153c982284888a000c.scope - libcontainer container 3e55a7b259e244495cfce277d39fe43758447e80b224a9153c982284888a000c. Nov 8 00:29:32.180502 containerd[1809]: time="2025-11-08T00:29:32.180460732Z" level=info msg="StartContainer for \"3e55a7b259e244495cfce277d39fe43758447e80b224a9153c982284888a000c\" returns successfully" Nov 8 00:29:32.260869 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:29:32.260924 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:29:32.298973 containerd[1809]: time="2025-11-08T00:29:32.298943636Z" level=info msg="StopPodSandbox for \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\"" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.327 [INFO][4657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.327 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" iface="eth0" netns="/var/run/netns/cni-5785c896-0ba9-c173-5657-f4ba479a4ae0" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.327 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" iface="eth0" netns="/var/run/netns/cni-5785c896-0ba9-c173-5657-f4ba479a4ae0" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.327 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" iface="eth0" netns="/var/run/netns/cni-5785c896-0ba9-c173-5657-f4ba479a4ae0" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.327 [INFO][4657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.327 [INFO][4657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.337 [INFO][4680] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.337 [INFO][4680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.337 [INFO][4680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.340 [WARNING][4680] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.340 [INFO][4680] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.341 [INFO][4680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:32.343907 containerd[1809]: 2025-11-08 00:29:32.342 [INFO][4657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:29:32.344221 containerd[1809]: time="2025-11-08T00:29:32.343957146Z" level=info msg="TearDown network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\" successfully" Nov 8 00:29:32.344221 containerd[1809]: time="2025-11-08T00:29:32.343976563Z" level=info msg="StopPodSandbox for \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\" returns successfully" Nov 8 00:29:32.397009 kubelet[3070]: I1108 00:29:32.396941 3070 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-ca-bundle\") pod \"f2aca3e9-badc-4243-9e39-2f08b59499bf\" (UID: \"f2aca3e9-badc-4243-9e39-2f08b59499bf\") " Nov 8 00:29:32.397922 kubelet[3070]: I1108 00:29:32.397054 3070 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6kn4\" (UniqueName: 
\"kubernetes.io/projected/f2aca3e9-badc-4243-9e39-2f08b59499bf-kube-api-access-f6kn4\") pod \"f2aca3e9-badc-4243-9e39-2f08b59499bf\" (UID: \"f2aca3e9-badc-4243-9e39-2f08b59499bf\") " Nov 8 00:29:32.397922 kubelet[3070]: I1108 00:29:32.397125 3070 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-backend-key-pair\") pod \"f2aca3e9-badc-4243-9e39-2f08b59499bf\" (UID: \"f2aca3e9-badc-4243-9e39-2f08b59499bf\") " Nov 8 00:29:32.397922 kubelet[3070]: I1108 00:29:32.397767 3070 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f2aca3e9-badc-4243-9e39-2f08b59499bf" (UID: "f2aca3e9-badc-4243-9e39-2f08b59499bf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:29:32.402824 kubelet[3070]: I1108 00:29:32.402715 3070 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2aca3e9-badc-4243-9e39-2f08b59499bf-kube-api-access-f6kn4" (OuterVolumeSpecName: "kube-api-access-f6kn4") pod "f2aca3e9-badc-4243-9e39-2f08b59499bf" (UID: "f2aca3e9-badc-4243-9e39-2f08b59499bf"). InnerVolumeSpecName "kube-api-access-f6kn4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:29:32.402824 kubelet[3070]: I1108 00:29:32.402784 3070 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2aca3e9-badc-4243-9e39-2f08b59499bf" (UID: "f2aca3e9-badc-4243-9e39-2f08b59499bf"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:29:32.498387 kubelet[3070]: I1108 00:29:32.498291 3070 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-ca-bundle\") on node \"ci-4081.3.6-n-8b27c00582\" DevicePath \"\"" Nov 8 00:29:32.498387 kubelet[3070]: I1108 00:29:32.498349 3070 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f6kn4\" (UniqueName: \"kubernetes.io/projected/f2aca3e9-badc-4243-9e39-2f08b59499bf-kube-api-access-f6kn4\") on node \"ci-4081.3.6-n-8b27c00582\" DevicePath \"\"" Nov 8 00:29:32.498387 kubelet[3070]: I1108 00:29:32.498382 3070 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2aca3e9-badc-4243-9e39-2f08b59499bf-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-8b27c00582\" DevicePath \"\"" Nov 8 00:29:32.806090 systemd[1]: Removed slice kubepods-besteffort-podf2aca3e9_badc_4243_9e39_2f08b59499bf.slice - libcontainer container kubepods-besteffort-podf2aca3e9_badc_4243_9e39_2f08b59499bf.slice. Nov 8 00:29:32.829212 kubelet[3070]: I1108 00:29:32.829079 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zrzlx" podStartSLOduration=2.059708371 podStartE2EDuration="13.82904906s" podCreationTimestamp="2025-11-08 00:29:19 +0000 UTC" firstStartedPulling="2025-11-08 00:29:20.300411924 +0000 UTC m=+18.676528755" lastFinishedPulling="2025-11-08 00:29:32.069752617 +0000 UTC m=+30.445869444" observedRunningTime="2025-11-08 00:29:32.828900015 +0000 UTC m=+31.205016912" watchObservedRunningTime="2025-11-08 00:29:32.82904906 +0000 UTC m=+31.205165933" Nov 8 00:29:32.889691 systemd[1]: Created slice kubepods-besteffort-pod225a8bd8_1a26_4c77_ba47_4755836593e3.slice - libcontainer container kubepods-besteffort-pod225a8bd8_1a26_4c77_ba47_4755836593e3.slice. 
Nov 8 00:29:32.900930 kubelet[3070]: I1108 00:29:32.900893 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2pl\" (UniqueName: \"kubernetes.io/projected/225a8bd8-1a26-4c77-ba47-4755836593e3-kube-api-access-wn2pl\") pod \"whisker-8665b9889f-q5txb\" (UID: \"225a8bd8-1a26-4c77-ba47-4755836593e3\") " pod="calico-system/whisker-8665b9889f-q5txb" Nov 8 00:29:32.901041 kubelet[3070]: I1108 00:29:32.900988 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/225a8bd8-1a26-4c77-ba47-4755836593e3-whisker-backend-key-pair\") pod \"whisker-8665b9889f-q5txb\" (UID: \"225a8bd8-1a26-4c77-ba47-4755836593e3\") " pod="calico-system/whisker-8665b9889f-q5txb" Nov 8 00:29:32.901041 kubelet[3070]: I1108 00:29:32.901013 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/225a8bd8-1a26-4c77-ba47-4755836593e3-whisker-ca-bundle\") pod \"whisker-8665b9889f-q5txb\" (UID: \"225a8bd8-1a26-4c77-ba47-4755836593e3\") " pod="calico-system/whisker-8665b9889f-q5txb" Nov 8 00:29:33.065682 systemd[1]: run-netns-cni\x2d5785c896\x2d0ba9\x2dc173\x2d5657\x2df4ba479a4ae0.mount: Deactivated successfully. Nov 8 00:29:33.065904 systemd[1]: var-lib-kubelet-pods-f2aca3e9\x2dbadc\x2d4243\x2d9e39\x2d2f08b59499bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df6kn4.mount: Deactivated successfully. Nov 8 00:29:33.066125 systemd[1]: var-lib-kubelet-pods-f2aca3e9\x2dbadc\x2d4243\x2d9e39\x2d2f08b59499bf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 8 00:29:33.195634 containerd[1809]: time="2025-11-08T00:29:33.195588735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8665b9889f-q5txb,Uid:225a8bd8-1a26-4c77-ba47-4755836593e3,Namespace:calico-system,Attempt:0,}" Nov 8 00:29:33.269180 systemd-networkd[1505]: cali54f78613831: Link UP Nov 8 00:29:33.269395 systemd-networkd[1505]: cali54f78613831: Gained carrier Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.211 [INFO][4711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.218 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0 whisker-8665b9889f- calico-system 225a8bd8-1a26-4c77-ba47-4755836593e3 865 0 2025-11-08 00:29:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8665b9889f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 whisker-8665b9889f-q5txb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali54f78613831 [] [] }} ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.218 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.232 [INFO][4735] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" HandleID="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.232 [INFO][4735] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" HandleID="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001394b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8b27c00582", "pod":"whisker-8665b9889f-q5txb", "timestamp":"2025-11-08 00:29:33.232334707 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.232 [INFO][4735] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.232 [INFO][4735] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.232 [INFO][4735] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.237 [INFO][4735] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.240 [INFO][4735] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.244 [INFO][4735] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.245 [INFO][4735] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.247 [INFO][4735] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.247 [INFO][4735] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.248 [INFO][4735] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.250 [INFO][4735] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.253 [INFO][4735] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.129/26] block=192.168.37.128/26 handle="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.253 [INFO][4735] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.129/26] handle="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.253 [INFO][4735] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:33.278572 containerd[1809]: 2025-11-08 00:29:33.253 [INFO][4735] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.129/26] IPv6=[] ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" HandleID="k8s-pod-network.0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" Nov 8 00:29:33.279440 containerd[1809]: 2025-11-08 00:29:33.254 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0", GenerateName:"whisker-8665b9889f-", Namespace:"calico-system", SelfLink:"", UID:"225a8bd8-1a26-4c77-ba47-4755836593e3", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8665b9889f", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"whisker-8665b9889f-q5txb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali54f78613831", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:33.279440 containerd[1809]: 2025-11-08 00:29:33.255 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.129/32] ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" Nov 8 00:29:33.279440 containerd[1809]: 2025-11-08 00:29:33.255 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54f78613831 ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" Nov 8 00:29:33.279440 containerd[1809]: 2025-11-08 00:29:33.269 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" Nov 8 00:29:33.279440 containerd[1809]: 2025-11-08 00:29:33.269 [INFO][4711] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0", GenerateName:"whisker-8665b9889f-", Namespace:"calico-system", SelfLink:"", UID:"225a8bd8-1a26-4c77-ba47-4755836593e3", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8665b9889f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a", Pod:"whisker-8665b9889f-q5txb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali54f78613831", MAC:"3e:14:ea:84:a1:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:33.279440 containerd[1809]: 2025-11-08 00:29:33.276 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a" 
Namespace="calico-system" Pod="whisker-8665b9889f-q5txb" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--8665b9889f--q5txb-eth0" Nov 8 00:29:33.288817 containerd[1809]: time="2025-11-08T00:29:33.288533417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:33.288817 containerd[1809]: time="2025-11-08T00:29:33.288781145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:33.288817 containerd[1809]: time="2025-11-08T00:29:33.288790156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:33.288922 containerd[1809]: time="2025-11-08T00:29:33.288834097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:33.310358 systemd[1]: Started cri-containerd-0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a.scope - libcontainer container 0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a. 
Nov 8 00:29:33.339287 containerd[1809]: time="2025-11-08T00:29:33.339235630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8665b9889f-q5txb,Uid:225a8bd8-1a26-4c77-ba47-4755836593e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e2c915f068209de96942af91aa04044f40541eeb15568cd83de95b219cccd5a\"" Nov 8 00:29:33.340141 containerd[1809]: time="2025-11-08T00:29:33.340094280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:29:33.678080 kubelet[3070]: I1108 00:29:33.677898 3070 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2aca3e9-badc-4243-9e39-2f08b59499bf" path="/var/lib/kubelet/pods/f2aca3e9-badc-4243-9e39-2f08b59499bf/volumes" Nov 8 00:29:33.754195 containerd[1809]: time="2025-11-08T00:29:33.754026901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:33.755118 containerd[1809]: time="2025-11-08T00:29:33.755043160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:29:33.755118 containerd[1809]: time="2025-11-08T00:29:33.755100405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:29:33.755308 kubelet[3070]: E1108 00:29:33.755255 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:33.755308 kubelet[3070]: E1108 00:29:33.755286 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:33.755365 kubelet[3070]: E1108 00:29:33.755335 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:33.756002 containerd[1809]: time="2025-11-08T00:29:33.755941171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:29:33.802062 kubelet[3070]: I1108 00:29:33.802006 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:29:34.170802 containerd[1809]: time="2025-11-08T00:29:34.170729249Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:34.171575 containerd[1809]: time="2025-11-08T00:29:34.171473008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:29:34.171575 containerd[1809]: time="2025-11-08T00:29:34.171538091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:34.171754 kubelet[3070]: E1108 00:29:34.171705 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:34.171754 kubelet[3070]: E1108 00:29:34.171732 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:34.171822 kubelet[3070]: E1108 00:29:34.171799 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:34.171840 kubelet[3070]: E1108 00:29:34.171824 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:29:34.407424 systemd-networkd[1505]: cali54f78613831: Gained IPv6LL Nov 8 00:29:34.808566 kubelet[3070]: E1108 00:29:34.808420 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:29:41.671387 containerd[1809]: time="2025-11-08T00:29:41.671319163Z" level=info msg="StopPodSandbox for \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\"" Nov 8 00:29:41.671680 containerd[1809]: time="2025-11-08T00:29:41.671557518Z" level=info msg="StopPodSandbox for \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\"" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.702 [INFO][5306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.702 [INFO][5306] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" iface="eth0" netns="/var/run/netns/cni-200f1b05-7aa0-993f-f7e2-68f90fb32fd9" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.702 [INFO][5306] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" iface="eth0" netns="/var/run/netns/cni-200f1b05-7aa0-993f-f7e2-68f90fb32fd9" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.702 [INFO][5306] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" iface="eth0" netns="/var/run/netns/cni-200f1b05-7aa0-993f-f7e2-68f90fb32fd9" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.702 [INFO][5306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.702 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.713 [INFO][5351] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.713 [INFO][5351] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.713 [INFO][5351] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.717 [WARNING][5351] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.717 [INFO][5351] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.718 [INFO][5351] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.719669 containerd[1809]: 2025-11-08 00:29:41.718 [INFO][5306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:29:41.720064 containerd[1809]: time="2025-11-08T00:29:41.720043782Z" level=info msg="TearDown network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\" successfully" Nov 8 00:29:41.720064 containerd[1809]: time="2025-11-08T00:29:41.720061207Z" level=info msg="StopPodSandbox for \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\" returns successfully" Nov 8 00:29:41.721251 containerd[1809]: time="2025-11-08T00:29:41.721234924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t42z5,Uid:d510fe8b-db97-40db-ab28-3634909f38a6,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:41.721964 systemd[1]: run-netns-cni\x2d200f1b05\x2d7aa0\x2d993f\x2df7e2\x2d68f90fb32fd9.mount: Deactivated successfully. 
Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.701 [INFO][5307] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.701 [INFO][5307] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" iface="eth0" netns="/var/run/netns/cni-e96caba4-44ec-70ff-e094-44d16504a444" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.701 [INFO][5307] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" iface="eth0" netns="/var/run/netns/cni-e96caba4-44ec-70ff-e094-44d16504a444" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.701 [INFO][5307] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" iface="eth0" netns="/var/run/netns/cni-e96caba4-44ec-70ff-e094-44d16504a444" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.701 [INFO][5307] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.701 [INFO][5307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.713 [INFO][5348] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.713 
[INFO][5348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.718 [INFO][5348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.723 [WARNING][5348] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.723 [INFO][5348] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.724 [INFO][5348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.726921 containerd[1809]: 2025-11-08 00:29:41.725 [INFO][5307] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:29:41.727237 containerd[1809]: time="2025-11-08T00:29:41.727007866Z" level=info msg="TearDown network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\" successfully" Nov 8 00:29:41.727237 containerd[1809]: time="2025-11-08T00:29:41.727031975Z" level=info msg="StopPodSandbox for \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\" returns successfully" Nov 8 00:29:41.728244 containerd[1809]: time="2025-11-08T00:29:41.728224330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8d496dff-jlg6z,Uid:7c46dfff-678e-44bc-9089-cef43e8fa0d3,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:41.731195 systemd[1]: run-netns-cni\x2de96caba4\x2d44ec\x2d70ff\x2de094\x2d44d16504a444.mount: Deactivated successfully. Nov 8 00:29:41.781680 systemd-networkd[1505]: cali8fb6fbf2e59: Link UP Nov 8 00:29:41.781794 systemd-networkd[1505]: cali8fb6fbf2e59: Gained carrier Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.739 [INFO][5401] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.747 [INFO][5401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0 goldmane-7c778bb748- calico-system d510fe8b-db97-40db-ab28-3634909f38a6 903 0 2025-11-08 00:29:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 goldmane-7c778bb748-t42z5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8fb6fbf2e59 [] [] }} ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" 
Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.747 [INFO][5401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" HandleID="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" HandleID="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8b27c00582", "pod":"goldmane-7c778bb748-t42z5", "timestamp":"2025-11-08 00:29:41.763186919 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.767 [INFO][5451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.770 [INFO][5451] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.772 [INFO][5451] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.773 [INFO][5451] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.774 [INFO][5451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.774 [INFO][5451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.775 [INFO][5451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0 Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.777 [INFO][5451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.779 [INFO][5451] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.130/26] block=192.168.37.128/26 handle="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.779 [INFO][5451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.130/26] handle="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.779 [INFO][5451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.787694 containerd[1809]: 2025-11-08 00:29:41.779 [INFO][5451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.130/26] IPv6=[] ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" HandleID="k8s-pod-network.d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.788088 containerd[1809]: 2025-11-08 00:29:41.780 [INFO][5401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d510fe8b-db97-40db-ab28-3634909f38a6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"goldmane-7c778bb748-t42z5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8fb6fbf2e59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.788088 containerd[1809]: 2025-11-08 00:29:41.780 [INFO][5401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.130/32] ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.788088 containerd[1809]: 2025-11-08 00:29:41.780 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fb6fbf2e59 ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.788088 containerd[1809]: 2025-11-08 00:29:41.781 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.788088 containerd[1809]: 2025-11-08 00:29:41.781 [INFO][5401] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d510fe8b-db97-40db-ab28-3634909f38a6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0", Pod:"goldmane-7c778bb748-t42z5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8fb6fbf2e59", MAC:"be:12:bd:a7:67:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.788088 containerd[1809]: 2025-11-08 00:29:41.786 [INFO][5401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0" Namespace="calico-system" Pod="goldmane-7c778bb748-t42z5" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:29:41.796490 containerd[1809]: time="2025-11-08T00:29:41.796208363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:41.796490 containerd[1809]: time="2025-11-08T00:29:41.796403319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:41.796490 containerd[1809]: time="2025-11-08T00:29:41.796411551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:41.796610 containerd[1809]: time="2025-11-08T00:29:41.796488136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:41.815443 systemd[1]: Started cri-containerd-d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0.scope - libcontainer container d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0. 
Nov 8 00:29:41.842371 containerd[1809]: time="2025-11-08T00:29:41.842344629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t42z5,Uid:d510fe8b-db97-40db-ab28-3634909f38a6,Namespace:calico-system,Attempt:1,} returns sandbox id \"d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0\"" Nov 8 00:29:41.843231 containerd[1809]: time="2025-11-08T00:29:41.843215709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:29:41.922619 systemd-networkd[1505]: cali5ed9eddce0e: Link UP Nov 8 00:29:41.923267 systemd-networkd[1505]: cali5ed9eddce0e: Gained carrier Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.743 [INFO][5415] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.749 [INFO][5415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0 calico-kube-controllers-7c8d496dff- calico-system 7c46dfff-678e-44bc-9089-cef43e8fa0d3 902 0 2025-11-08 00:29:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c8d496dff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 calico-kube-controllers-7c8d496dff-jlg6z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5ed9eddce0e [] [] }} ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.749 [INFO][5415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5457] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" HandleID="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5457] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" HandleID="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000599620), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8b27c00582", "pod":"calico-kube-controllers-7c8d496dff-jlg6z", "timestamp":"2025-11-08 00:29:41.763762572 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.763 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.779 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.779 [INFO][5457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.870 [INFO][5457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.878 [INFO][5457] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.886 [INFO][5457] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.890 [INFO][5457] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.895 [INFO][5457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.895 [INFO][5457] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.898 [INFO][5457] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.904 [INFO][5457] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.914 [INFO][5457] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.131/26] block=192.168.37.128/26 handle="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.914 [INFO][5457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.131/26] handle="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.914 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:41.939059 containerd[1809]: 2025-11-08 00:29:41.914 [INFO][5457] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.131/26] IPv6=[] ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" HandleID="k8s-pod-network.3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.939592 containerd[1809]: 2025-11-08 00:29:41.918 [INFO][5415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0", GenerateName:"calico-kube-controllers-7c8d496dff-", Namespace:"calico-system", SelfLink:"", UID:"7c46dfff-678e-44bc-9089-cef43e8fa0d3", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8d496dff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"calico-kube-controllers-7c8d496dff-jlg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ed9eddce0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.939592 containerd[1809]: 2025-11-08 00:29:41.918 [INFO][5415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.131/32] ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.939592 containerd[1809]: 2025-11-08 00:29:41.919 [INFO][5415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ed9eddce0e ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.939592 containerd[1809]: 2025-11-08 00:29:41.923 [INFO][5415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.939592 containerd[1809]: 2025-11-08 00:29:41.924 [INFO][5415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0", GenerateName:"calico-kube-controllers-7c8d496dff-", Namespace:"calico-system", SelfLink:"", UID:"7c46dfff-678e-44bc-9089-cef43e8fa0d3", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8d496dff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c", Pod:"calico-kube-controllers-7c8d496dff-jlg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ed9eddce0e", MAC:"ba:cf:f8:ac:ef:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:41.939592 containerd[1809]: 2025-11-08 00:29:41.937 [INFO][5415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c" Namespace="calico-system" Pod="calico-kube-controllers-7c8d496dff-jlg6z" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:29:41.948518 containerd[1809]: time="2025-11-08T00:29:41.948255558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:41.948576 containerd[1809]: time="2025-11-08T00:29:41.948494350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:41.948576 containerd[1809]: time="2025-11-08T00:29:41.948504303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:41.948576 containerd[1809]: time="2025-11-08T00:29:41.948549501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:41.971381 systemd[1]: Started cri-containerd-3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c.scope - libcontainer container 3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c. 
Nov 8 00:29:41.999925 containerd[1809]: time="2025-11-08T00:29:41.999867307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c8d496dff-jlg6z,Uid:7c46dfff-678e-44bc-9089-cef43e8fa0d3,Namespace:calico-system,Attempt:1,} returns sandbox id \"3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c\"" Nov 8 00:29:42.214699 containerd[1809]: time="2025-11-08T00:29:42.214572446Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:42.215689 containerd[1809]: time="2025-11-08T00:29:42.215598239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:29:42.215758 containerd[1809]: time="2025-11-08T00:29:42.215687386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:42.215899 kubelet[3070]: E1108 00:29:42.215841 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:42.216116 kubelet[3070]: E1108 00:29:42.215900 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:42.216116 kubelet[3070]: E1108 00:29:42.216023 3070 kuberuntime_manager.go:1449] "Unhandled 
Error" err="container goldmane start failed in pod goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:42.216116 kubelet[3070]: E1108 00:29:42.216046 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:29:42.216268 containerd[1809]: time="2025-11-08T00:29:42.216159899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:29:42.611800 containerd[1809]: time="2025-11-08T00:29:42.611522620Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:42.612490 containerd[1809]: time="2025-11-08T00:29:42.612411976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:29:42.612527 containerd[1809]: time="2025-11-08T00:29:42.612475995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:42.612669 kubelet[3070]: E1108 00:29:42.612615 3070 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:42.612669 kubelet[3070]: E1108 00:29:42.612648 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:42.612732 kubelet[3070]: E1108 00:29:42.612690 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:42.612732 kubelet[3070]: E1108 00:29:42.612710 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:29:42.671745 containerd[1809]: time="2025-11-08T00:29:42.671691902Z" level=info msg="StopPodSandbox for 
\"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\"" Nov 8 00:29:42.671977 containerd[1809]: time="2025-11-08T00:29:42.671801416Z" level=info msg="StopPodSandbox for \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\"" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" iface="eth0" netns="/var/run/netns/cni-88198362-cb77-29c5-6143-0c2365f12c65" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" iface="eth0" netns="/var/run/netns/cni-88198362-cb77-29c5-6143-0c2365f12c65" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" iface="eth0" netns="/var/run/netns/cni-88198362-cb77-29c5-6143-0c2365f12c65" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.708 [INFO][5638] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.708 [INFO][5638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.708 [INFO][5638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.713 [WARNING][5638] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.713 [INFO][5638] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.714 [INFO][5638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.715654 containerd[1809]: 2025-11-08 00:29:42.714 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:29:42.715949 containerd[1809]: time="2025-11-08T00:29:42.715705244Z" level=info msg="TearDown network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\" successfully" Nov 8 00:29:42.715949 containerd[1809]: time="2025-11-08T00:29:42.715723702Z" level=info msg="StopPodSandbox for \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\" returns successfully" Nov 8 00:29:42.717221 containerd[1809]: time="2025-11-08T00:29:42.717176669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-rk6lq,Uid:a4457e65-0840-44a3-9b91-05cc2050df9f,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.696 [INFO][5606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.696 [INFO][5606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" iface="eth0" netns="/var/run/netns/cni-f462968d-ab65-8906-9287-3161cd79c5e8" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.696 [INFO][5606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" iface="eth0" netns="/var/run/netns/cni-f462968d-ab65-8906-9287-3161cd79c5e8" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" iface="eth0" netns="/var/run/netns/cni-f462968d-ab65-8906-9287-3161cd79c5e8" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.697 [INFO][5606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.708 [INFO][5636] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.708 [INFO][5636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.714 [INFO][5636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.717 [WARNING][5636] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.717 [INFO][5636] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.718 [INFO][5636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.720954 containerd[1809]: 2025-11-08 00:29:42.720 [INFO][5606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Nov 8 00:29:42.721209 containerd[1809]: time="2025-11-08T00:29:42.720986562Z" level=info msg="TearDown network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\" successfully" Nov 8 00:29:42.721209 containerd[1809]: time="2025-11-08T00:29:42.721001966Z" level=info msg="StopPodSandbox for \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\" returns successfully" Nov 8 00:29:42.722121 containerd[1809]: time="2025-11-08T00:29:42.722076148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njlbj,Uid:2db14322-3de3-476c-bc43-59b2bd1acea4,Namespace:calico-system,Attempt:1,}" Nov 8 00:29:42.722877 systemd[1]: run-netns-cni\x2d88198362\x2dcb77\x2d29c5\x2d6143\x2d0c2365f12c65.mount: Deactivated successfully. Nov 8 00:29:42.725533 systemd[1]: run-netns-cni\x2df462968d\x2dab65\x2d8906\x2d9287\x2d3161cd79c5e8.mount: Deactivated successfully. 
Nov 8 00:29:42.776154 systemd-networkd[1505]: cali9ec7e4cb991: Link UP Nov 8 00:29:42.776259 systemd-networkd[1505]: cali9ec7e4cb991: Gained carrier Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.732 [INFO][5671] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.741 [INFO][5671] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0 calico-apiserver-6694c6b5c5- calico-apiserver a4457e65-0840-44a3-9b91-05cc2050df9f 922 0 2025-11-08 00:29:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6694c6b5c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 calico-apiserver-6694c6b5c5-rk6lq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ec7e4cb991 [] [] }} ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.742 [INFO][5671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.756 [INFO][5740] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" 
HandleID="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.756 [INFO][5740] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" HandleID="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033e4d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8b27c00582", "pod":"calico-apiserver-6694c6b5c5-rk6lq", "timestamp":"2025-11-08 00:29:42.756464305 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.756 [INFO][5740] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.756 [INFO][5740] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.756 [INFO][5740] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.761 [INFO][5740] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.764 [INFO][5740] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.766 [INFO][5740] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.767 [INFO][5740] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.768 [INFO][5740] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.768 [INFO][5740] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.769 [INFO][5740] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6 Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.772 [INFO][5740] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.774 [INFO][5740] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.132/26] block=192.168.37.128/26 handle="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.774 [INFO][5740] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.132/26] handle="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.774 [INFO][5740] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.782681 containerd[1809]: 2025-11-08 00:29:42.774 [INFO][5740] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.132/26] IPv6=[] ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" HandleID="k8s-pod-network.02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.783113 containerd[1809]: 2025-11-08 00:29:42.775 [INFO][5671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4457e65-0840-44a3-9b91-05cc2050df9f", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"calico-apiserver-6694c6b5c5-rk6lq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec7e4cb991", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.783113 containerd[1809]: 2025-11-08 00:29:42.775 [INFO][5671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.132/32] ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.783113 containerd[1809]: 2025-11-08 00:29:42.775 [INFO][5671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ec7e4cb991 ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.783113 containerd[1809]: 2025-11-08 00:29:42.776 [INFO][5671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" 
WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.783113 containerd[1809]: 2025-11-08 00:29:42.776 [INFO][5671] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4457e65-0840-44a3-9b91-05cc2050df9f", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6", Pod:"calico-apiserver-6694c6b5c5-rk6lq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec7e4cb991", MAC:"2e:ec:3e:a1:1c:19", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.783113 containerd[1809]: 2025-11-08 00:29:42.781 [INFO][5671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-rk6lq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:29:42.791014 containerd[1809]: time="2025-11-08T00:29:42.790916357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:42.791014 containerd[1809]: time="2025-11-08T00:29:42.790978777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:42.791014 containerd[1809]: time="2025-11-08T00:29:42.790986355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:42.791131 containerd[1809]: time="2025-11-08T00:29:42.791026227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:42.817643 systemd[1]: Started cri-containerd-02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6.scope - libcontainer container 02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6. 
Nov 8 00:29:42.836966 kubelet[3070]: E1108 00:29:42.836875 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:29:42.839998 kubelet[3070]: E1108 00:29:42.839913 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:29:42.887214 systemd-networkd[1505]: cali85214257e4b: Link UP Nov 8 00:29:42.887466 systemd-networkd[1505]: cali85214257e4b: Gained carrier Nov 8 00:29:42.891682 containerd[1809]: time="2025-11-08T00:29:42.891657221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-rk6lq,Uid:a4457e65-0840-44a3-9b91-05cc2050df9f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6\"" Nov 8 00:29:42.892565 containerd[1809]: time="2025-11-08T00:29:42.892550469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.738 
[INFO][5683] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.744 [INFO][5683] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0 csi-node-driver- calico-system 2db14322-3de3-476c-bc43-59b2bd1acea4 921 0 2025-11-08 00:29:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 csi-node-driver-njlbj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali85214257e4b [] [] }} ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.744 [INFO][5683] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.759 [INFO][5746] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" HandleID="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.759 [INFO][5746] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" HandleID="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5610), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-8b27c00582", "pod":"csi-node-driver-njlbj", "timestamp":"2025-11-08 00:29:42.759034465 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.759 [INFO][5746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.774 [INFO][5746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.774 [INFO][5746] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.862 [INFO][5746] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.868 [INFO][5746] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.872 [INFO][5746] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.874 [INFO][5746] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.876 [INFO][5746] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.876 [INFO][5746] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.877 [INFO][5746] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.880 [INFO][5746] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.884 [INFO][5746] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.133/26] block=192.168.37.128/26 handle="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.884 [INFO][5746] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.133/26] handle="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.884 [INFO][5746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:42.894519 containerd[1809]: 2025-11-08 00:29:42.884 [INFO][5746] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.133/26] IPv6=[] ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" HandleID="k8s-pod-network.8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.894911 containerd[1809]: 2025-11-08 00:29:42.885 [INFO][5683] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db14322-3de3-476c-bc43-59b2bd1acea4", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"csi-node-driver-njlbj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85214257e4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.894911 containerd[1809]: 2025-11-08 00:29:42.885 [INFO][5683] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.133/32] ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.894911 containerd[1809]: 2025-11-08 00:29:42.885 [INFO][5683] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85214257e4b ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.894911 containerd[1809]: 2025-11-08 00:29:42.887 [INFO][5683] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.894911 containerd[1809]: 2025-11-08 00:29:42.887 
[INFO][5683] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db14322-3de3-476c-bc43-59b2bd1acea4", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec", Pod:"csi-node-driver-njlbj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85214257e4b", MAC:"4e:5c:67:c9:be:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:42.894911 containerd[1809]: 2025-11-08 00:29:42.893 [INFO][5683] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec" Namespace="calico-system" Pod="csi-node-driver-njlbj" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0" Nov 8 00:29:42.902375 containerd[1809]: time="2025-11-08T00:29:42.902337032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:42.902375 containerd[1809]: time="2025-11-08T00:29:42.902367742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:42.902375 containerd[1809]: time="2025-11-08T00:29:42.902375258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:42.902485 containerd[1809]: time="2025-11-08T00:29:42.902414535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:42.922584 systemd[1]: Started cri-containerd-8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec.scope - libcontainer container 8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec. 
Nov 8 00:29:42.973469 containerd[1809]: time="2025-11-08T00:29:42.973393977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-njlbj,Uid:2db14322-3de3-476c-bc43-59b2bd1acea4,Namespace:calico-system,Attempt:1,} returns sandbox id \"8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec\"" Nov 8 00:29:43.267662 containerd[1809]: time="2025-11-08T00:29:43.267533663Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:43.268669 containerd[1809]: time="2025-11-08T00:29:43.268579897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:29:43.268717 containerd[1809]: time="2025-11-08T00:29:43.268666168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:43.268908 kubelet[3070]: E1108 00:29:43.268853 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:43.268908 kubelet[3070]: E1108 00:29:43.268907 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:43.269191 kubelet[3070]: E1108 00:29:43.269043 3070 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:43.269191 kubelet[3070]: E1108 00:29:43.269092 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:29:43.269305 containerd[1809]: time="2025-11-08T00:29:43.269185013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:29:43.431354 systemd-networkd[1505]: cali8fb6fbf2e59: Gained IPv6LL Nov 8 00:29:43.648627 containerd[1809]: time="2025-11-08T00:29:43.648375439Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:43.649303 containerd[1809]: time="2025-11-08T00:29:43.649210203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:29:43.649365 containerd[1809]: time="2025-11-08T00:29:43.649289842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:29:43.649491 kubelet[3070]: E1108 00:29:43.649432 3070 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:43.649491 kubelet[3070]: E1108 00:29:43.649488 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:43.649567 kubelet[3070]: E1108 00:29:43.649531 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:43.650091 containerd[1809]: time="2025-11-08T00:29:43.650078647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:29:43.672377 containerd[1809]: time="2025-11-08T00:29:43.672351880Z" level=info msg="StopPodSandbox for \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\"" Nov 8 00:29:43.673457 containerd[1809]: time="2025-11-08T00:29:43.673435895Z" level=info msg="StopPodSandbox for \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\"" Nov 8 00:29:43.673512 containerd[1809]: time="2025-11-08T00:29:43.673475822Z" level=info msg="StopPodSandbox for \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\"" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.697 [INFO][5910] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.697 [INFO][5910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" iface="eth0" netns="/var/run/netns/cni-3c03635c-0b39-3510-3c01-b411f049c56e" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.697 [INFO][5910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" iface="eth0" netns="/var/run/netns/cni-3c03635c-0b39-3510-3c01-b411f049c56e" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.697 [INFO][5910] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" iface="eth0" netns="/var/run/netns/cni-3c03635c-0b39-3510-3c01-b411f049c56e" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.697 [INFO][5910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.697 [INFO][5910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.707 [INFO][5955] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.707 [INFO][5955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.707 [INFO][5955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.710 [WARNING][5955] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.710 [INFO][5955] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.711 [INFO][5955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:43.712939 containerd[1809]: 2025-11-08 00:29:43.712 [INFO][5910] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Nov 8 00:29:43.713225 containerd[1809]: time="2025-11-08T00:29:43.713013327Z" level=info msg="TearDown network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\" successfully" Nov 8 00:29:43.713225 containerd[1809]: time="2025-11-08T00:29:43.713028126Z" level=info msg="StopPodSandbox for \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\" returns successfully" Nov 8 00:29:43.714548 containerd[1809]: time="2025-11-08T00:29:43.714533221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl8rk,Uid:24a3af85-008b-4d6d-85c6-e1f4e122242a,Namespace:kube-system,Attempt:1,}" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.696 [INFO][5901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.696 [INFO][5901] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" iface="eth0" netns="/var/run/netns/cni-84fb4f7d-e975-e701-d51a-a870c9c5d93e" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.696 [INFO][5901] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" iface="eth0" netns="/var/run/netns/cni-84fb4f7d-e975-e701-d51a-a870c9c5d93e" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.696 [INFO][5901] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" iface="eth0" netns="/var/run/netns/cni-84fb4f7d-e975-e701-d51a-a870c9c5d93e" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.696 [INFO][5901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.697 [INFO][5901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.707 [INFO][5953] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.707 [INFO][5953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.711 [INFO][5953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.714 [WARNING][5953] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.714 [INFO][5953] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.715 [INFO][5953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:43.717304 containerd[1809]: 2025-11-08 00:29:43.716 [INFO][5901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:29:43.717781 containerd[1809]: time="2025-11-08T00:29:43.717369821Z" level=info msg="TearDown network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\" successfully" Nov 8 00:29:43.717781 containerd[1809]: time="2025-11-08T00:29:43.717381229Z" level=info msg="StopPodSandbox for \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\" returns successfully" Nov 8 00:29:43.718554 containerd[1809]: time="2025-11-08T00:29:43.718539118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x794,Uid:610570e2-7f08-4a1f-b974-d26709be3c92,Namespace:kube-system,Attempt:1,}" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.698 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.698 [INFO][5911] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" iface="eth0" netns="/var/run/netns/cni-b091a9e2-41ab-c382-9559-8582dfe8d798" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.698 [INFO][5911] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" iface="eth0" netns="/var/run/netns/cni-b091a9e2-41ab-c382-9559-8582dfe8d798" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.698 [INFO][5911] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" iface="eth0" netns="/var/run/netns/cni-b091a9e2-41ab-c382-9559-8582dfe8d798" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.698 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.698 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.708 [INFO][5965] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.708 [INFO][5965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.715 [INFO][5965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.719 [WARNING][5965] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.720 [INFO][5965] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.721 [INFO][5965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:43.723127 containerd[1809]: 2025-11-08 00:29:43.722 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:29:43.723410 containerd[1809]: time="2025-11-08T00:29:43.723192824Z" level=info msg="TearDown network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\" successfully" Nov 8 00:29:43.723410 containerd[1809]: time="2025-11-08T00:29:43.723211587Z" level=info msg="StopPodSandbox for \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\" returns successfully" Nov 8 00:29:43.723360 systemd[1]: run-netns-cni\x2d84fb4f7d\x2de975\x2de701\x2dd51a\x2da870c9c5d93e.mount: Deactivated successfully. Nov 8 00:29:43.723418 systemd[1]: run-netns-cni\x2d3c03635c\x2d0b39\x2d3510\x2d3c01\x2db411f049c56e.mount: Deactivated successfully. 
Nov 8 00:29:43.724105 containerd[1809]: time="2025-11-08T00:29:43.724090248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-xb2cq,Uid:5ec5b66b-733a-489d-9c96-c95ce9255384,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:29:43.726457 systemd[1]: run-netns-cni\x2db091a9e2\x2d41ab\x2dc382\x2d9559\x2d8582dfe8d798.mount: Deactivated successfully. Nov 8 00:29:43.775733 systemd-networkd[1505]: calid20bea3e92b: Link UP Nov 8 00:29:43.776114 systemd-networkd[1505]: calid20bea3e92b: Gained carrier Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.730 [INFO][5984] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.738 [INFO][5984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0 coredns-66bc5c9577- kube-system 24a3af85-008b-4d6d-85c6-e1f4e122242a 951 0 2025-11-08 00:29:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 coredns-66bc5c9577-nl8rk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid20bea3e92b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.738 [INFO][5984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" 
WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.752 [INFO][6058] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" HandleID="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.752 [INFO][6058] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" HandleID="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0007805f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8b27c00582", "pod":"coredns-66bc5c9577-nl8rk", "timestamp":"2025-11-08 00:29:43.752772721 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.752 [INFO][6058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.752 [INFO][6058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.752 [INFO][6058] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.757 [INFO][6058] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.761 [INFO][6058] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.764 [INFO][6058] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.765 [INFO][6058] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.767 [INFO][6058] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.767 [INFO][6058] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.768 [INFO][6058] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047 Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.770 [INFO][6058] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.773 [INFO][6058] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.134/26] block=192.168.37.128/26 handle="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.773 [INFO][6058] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.134/26] handle="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.773 [INFO][6058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:43.781892 containerd[1809]: 2025-11-08 00:29:43.773 [INFO][6058] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.134/26] IPv6=[] ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" HandleID="k8s-pod-network.707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.782576 containerd[1809]: 2025-11-08 00:29:43.774 [INFO][5984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"24a3af85-008b-4d6d-85c6-e1f4e122242a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"coredns-66bc5c9577-nl8rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20bea3e92b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:43.782576 containerd[1809]: 2025-11-08 00:29:43.774 [INFO][5984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.134/32] ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.782576 containerd[1809]: 2025-11-08 00:29:43.774 [INFO][5984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid20bea3e92b 
ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.782576 containerd[1809]: 2025-11-08 00:29:43.775 [INFO][5984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.782739 containerd[1809]: 2025-11-08 00:29:43.776 [INFO][5984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"24a3af85-008b-4d6d-85c6-e1f4e122242a", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047", 
Pod:"coredns-66bc5c9577-nl8rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20bea3e92b", MAC:"ee:44:02:4f:78:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:43.782739 containerd[1809]: 2025-11-08 00:29:43.780 [INFO][5984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047" Namespace="kube-system" Pod="coredns-66bc5c9577-nl8rk" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0" Nov 8 00:29:43.791461 containerd[1809]: time="2025-11-08T00:29:43.791410143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:43.791461 containerd[1809]: time="2025-11-08T00:29:43.791450454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:43.791575 containerd[1809]: time="2025-11-08T00:29:43.791468315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:43.791575 containerd[1809]: time="2025-11-08T00:29:43.791524189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:43.807315 systemd[1]: Started cri-containerd-707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047.scope - libcontainer container 707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047. Nov 8 00:29:43.830272 containerd[1809]: time="2025-11-08T00:29:43.830252052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nl8rk,Uid:24a3af85-008b-4d6d-85c6-e1f4e122242a,Namespace:kube-system,Attempt:1,} returns sandbox id \"707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047\"" Nov 8 00:29:43.832261 containerd[1809]: time="2025-11-08T00:29:43.832248031Z" level=info msg="CreateContainer within sandbox \"707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:29:43.836666 containerd[1809]: time="2025-11-08T00:29:43.836607166Z" level=info msg="CreateContainer within sandbox \"707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce43c07abdb107a928194bf8418be3ddaf6915907e9e0974ed0bf88331461280\"" Nov 8 00:29:43.836869 containerd[1809]: time="2025-11-08T00:29:43.836849204Z" level=info msg="StartContainer for \"ce43c07abdb107a928194bf8418be3ddaf6915907e9e0974ed0bf88331461280\"" Nov 8 00:29:43.840627 kubelet[3070]: E1108 00:29:43.840601 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:29:43.841308 kubelet[3070]: E1108 00:29:43.841292 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:29:43.841377 kubelet[3070]: E1108 00:29:43.841361 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:29:43.857316 systemd[1]: Started cri-containerd-ce43c07abdb107a928194bf8418be3ddaf6915907e9e0974ed0bf88331461280.scope - libcontainer container ce43c07abdb107a928194bf8418be3ddaf6915907e9e0974ed0bf88331461280. 
Nov 8 00:29:43.869479 containerd[1809]: time="2025-11-08T00:29:43.869455836Z" level=info msg="StartContainer for \"ce43c07abdb107a928194bf8418be3ddaf6915907e9e0974ed0bf88331461280\" returns successfully" Nov 8 00:29:43.874055 systemd-networkd[1505]: cali64cc8e0b3a1: Link UP Nov 8 00:29:43.874189 systemd-networkd[1505]: cali64cc8e0b3a1: Gained carrier Nov 8 00:29:43.878227 systemd-networkd[1505]: cali9ec7e4cb991: Gained IPv6LL Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.734 [INFO][5999] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.741 [INFO][5999] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0 coredns-66bc5c9577- kube-system 610570e2-7f08-4a1f-b974-d26709be3c92 950 0 2025-11-08 00:29:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 coredns-66bc5c9577-7x794 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64cc8e0b3a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.741 [INFO][5999] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.754 
[INFO][6065] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" HandleID="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.754 [INFO][6065] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" HandleID="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-8b27c00582", "pod":"coredns-66bc5c9577-7x794", "timestamp":"2025-11-08 00:29:43.75469305 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.754 [INFO][6065] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.773 [INFO][6065] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.773 [INFO][6065] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.858 [INFO][6065] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.861 [INFO][6065] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.864 [INFO][6065] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.864 [INFO][6065] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.866 [INFO][6065] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.866 [INFO][6065] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.867 [INFO][6065] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914 Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.868 [INFO][6065] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.872 [INFO][6065] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.135/26] block=192.168.37.128/26 handle="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.872 [INFO][6065] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.135/26] handle="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.872 [INFO][6065] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:43.879964 containerd[1809]: 2025-11-08 00:29:43.872 [INFO][6065] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.135/26] IPv6=[] ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" HandleID="k8s-pod-network.d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.880398 containerd[1809]: 2025-11-08 00:29:43.873 [INFO][5999] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"610570e2-7f08-4a1f-b974-d26709be3c92", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"coredns-66bc5c9577-7x794", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64cc8e0b3a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:43.880398 containerd[1809]: 2025-11-08 00:29:43.873 [INFO][5999] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.135/32] ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.880398 containerd[1809]: 2025-11-08 00:29:43.873 [INFO][5999] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64cc8e0b3a1 
ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.880398 containerd[1809]: 2025-11-08 00:29:43.874 [INFO][5999] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.880504 containerd[1809]: 2025-11-08 00:29:43.874 [INFO][5999] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"610570e2-7f08-4a1f-b974-d26709be3c92", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914", 
Pod:"coredns-66bc5c9577-7x794", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64cc8e0b3a1", MAC:"56:2c:2d:12:c6:b9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:43.880504 containerd[1809]: 2025-11-08 00:29:43.879 [INFO][5999] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914" Namespace="kube-system" Pod="coredns-66bc5c9577-7x794" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:29:43.888655 containerd[1809]: time="2025-11-08T00:29:43.888612010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:43.888831 containerd[1809]: time="2025-11-08T00:29:43.888818095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:43.888859 containerd[1809]: time="2025-11-08T00:29:43.888829345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:43.888884 containerd[1809]: time="2025-11-08T00:29:43.888873002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:43.906301 systemd[1]: Started cri-containerd-d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914.scope - libcontainer container d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914. Nov 8 00:29:43.928413 containerd[1809]: time="2025-11-08T00:29:43.928365218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7x794,Uid:610570e2-7f08-4a1f-b974-d26709be3c92,Namespace:kube-system,Attempt:1,} returns sandbox id \"d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914\"" Nov 8 00:29:43.930172 containerd[1809]: time="2025-11-08T00:29:43.930158397Z" level=info msg="CreateContainer within sandbox \"d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:29:43.934505 containerd[1809]: time="2025-11-08T00:29:43.934452080Z" level=info msg="CreateContainer within sandbox \"d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64889facaf7f0738f4339c6b9ea6071a57680fe073a5af92828bb83a48db64ec\"" Nov 8 00:29:43.934643 containerd[1809]: time="2025-11-08T00:29:43.934631058Z" level=info msg="StartContainer for \"64889facaf7f0738f4339c6b9ea6071a57680fe073a5af92828bb83a48db64ec\"" Nov 8 00:29:43.943237 systemd-networkd[1505]: cali5ed9eddce0e: Gained IPv6LL Nov 8 00:29:43.954435 systemd[1]: Started cri-containerd-64889facaf7f0738f4339c6b9ea6071a57680fe073a5af92828bb83a48db64ec.scope - 
libcontainer container 64889facaf7f0738f4339c6b9ea6071a57680fe073a5af92828bb83a48db64ec. Nov 8 00:29:43.968107 containerd[1809]: time="2025-11-08T00:29:43.968084562Z" level=info msg="StartContainer for \"64889facaf7f0738f4339c6b9ea6071a57680fe073a5af92828bb83a48db64ec\" returns successfully" Nov 8 00:29:43.977818 systemd-networkd[1505]: caliabc02d287ae: Link UP Nov 8 00:29:43.977997 systemd-networkd[1505]: caliabc02d287ae: Gained carrier Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.739 [INFO][6023] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.746 [INFO][6023] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0 calico-apiserver-6694c6b5c5- calico-apiserver 5ec5b66b-733a-489d-9c96-c95ce9255384 952 0 2025-11-08 00:29:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6694c6b5c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-8b27c00582 calico-apiserver-6694c6b5c5-xb2cq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliabc02d287ae [] [] }} ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.746 [INFO][6023] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" 
WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.760 [INFO][6084] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" HandleID="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.760 [INFO][6084] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" HandleID="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e75c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-8b27c00582", "pod":"calico-apiserver-6694c6b5c5-xb2cq", "timestamp":"2025-11-08 00:29:43.760277257 +0000 UTC"}, Hostname:"ci-4081.3.6-n-8b27c00582", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.760 [INFO][6084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.872 [INFO][6084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.872 [INFO][6084] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-8b27c00582' Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.959 [INFO][6084] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.962 [INFO][6084] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.965 [INFO][6084] ipam/ipam.go 511: Trying affinity for 192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.967 [INFO][6084] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.968 [INFO][6084] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.128/26 host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.968 [INFO][6084] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.37.128/26 handle="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.969 [INFO][6084] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.971 [INFO][6084] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.37.128/26 handle="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.975 [INFO][6084] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.37.136/26] block=192.168.37.128/26 handle="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.975 [INFO][6084] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.136/26] handle="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" host="ci-4081.3.6-n-8b27c00582" Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.975 [INFO][6084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:29:44.006466 containerd[1809]: 2025-11-08 00:29:43.975 [INFO][6084] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.37.136/26] IPv6=[] ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" HandleID="k8s-pod-network.66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:44.006905 containerd[1809]: 2025-11-08 00:29:43.976 [INFO][6023] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec5b66b-733a-489d-9c96-c95ce9255384", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"", Pod:"calico-apiserver-6694c6b5c5-xb2cq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabc02d287ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:44.006905 containerd[1809]: 2025-11-08 00:29:43.976 [INFO][6023] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.136/32] ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:44.006905 containerd[1809]: 2025-11-08 00:29:43.976 [INFO][6023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliabc02d287ae ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:44.006905 containerd[1809]: 2025-11-08 00:29:43.978 [INFO][6023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" 
WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:44.006905 containerd[1809]: 2025-11-08 00:29:43.978 [INFO][6023] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec5b66b-733a-489d-9c96-c95ce9255384", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b", Pod:"calico-apiserver-6694c6b5c5-xb2cq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabc02d287ae", MAC:"2a:d9:76:50:a8:d0", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:29:44.006905 containerd[1809]: 2025-11-08 00:29:44.005 [INFO][6023] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b" Namespace="calico-apiserver" Pod="calico-apiserver-6694c6b5c5-xb2cq" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:29:44.024730 containerd[1809]: time="2025-11-08T00:29:44.024674743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:29:44.024730 containerd[1809]: time="2025-11-08T00:29:44.024701167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:29:44.024730 containerd[1809]: time="2025-11-08T00:29:44.024708193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:44.024828 containerd[1809]: time="2025-11-08T00:29:44.024749214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:29:44.039120 containerd[1809]: time="2025-11-08T00:29:44.039098354Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:44.039562 containerd[1809]: time="2025-11-08T00:29:44.039541034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:29:44.039606 containerd[1809]: time="2025-11-08T00:29:44.039582670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:29:44.039741 kubelet[3070]: E1108 00:29:44.039682 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:44.039741 kubelet[3070]: E1108 00:29:44.039709 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:44.039805 kubelet[3070]: E1108 00:29:44.039752 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:44.039805 kubelet[3070]: E1108 00:29:44.039776 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:44.044360 systemd[1]: Started cri-containerd-66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b.scope - libcontainer container 66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b. 
Nov 8 00:29:44.070223 containerd[1809]: time="2025-11-08T00:29:44.070171736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6694c6b5c5-xb2cq,Uid:5ec5b66b-733a-489d-9c96-c95ce9255384,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b\"" Nov 8 00:29:44.070978 containerd[1809]: time="2025-11-08T00:29:44.070965559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:44.071228 systemd-networkd[1505]: cali85214257e4b: Gained IPv6LL Nov 8 00:29:44.440966 containerd[1809]: time="2025-11-08T00:29:44.440830641Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:44.441907 containerd[1809]: time="2025-11-08T00:29:44.441814397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:29:44.442000 containerd[1809]: time="2025-11-08T00:29:44.441904579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:44.442087 kubelet[3070]: E1108 00:29:44.442063 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:44.442292 kubelet[3070]: E1108 00:29:44.442093 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:44.442292 kubelet[3070]: E1108 00:29:44.442153 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:44.442292 kubelet[3070]: E1108 00:29:44.442175 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:29:44.845243 kubelet[3070]: E1108 00:29:44.845097 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:29:44.848528 kubelet[3070]: E1108 00:29:44.848487 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:29:44.848792 kubelet[3070]: E1108 00:29:44.848755 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:44.861388 kubelet[3070]: I1108 00:29:44.861337 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7x794" podStartSLOduration=36.861320336 podStartE2EDuration="36.861320336s" podCreationTimestamp="2025-11-08 00:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-08 00:29:44.860901994 +0000 UTC m=+43.237018842" watchObservedRunningTime="2025-11-08 00:29:44.861320336 +0000 UTC m=+43.237437166" Nov 8 00:29:44.881232 kubelet[3070]: I1108 00:29:44.881202 3070 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nl8rk" podStartSLOduration=36.881191175 podStartE2EDuration="36.881191175s" podCreationTimestamp="2025-11-08 00:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:29:44.875233774 +0000 UTC m=+43.251350608" watchObservedRunningTime="2025-11-08 00:29:44.881191175 +0000 UTC m=+43.257308002" Nov 8 00:29:45.030673 systemd-networkd[1505]: cali64cc8e0b3a1: Gained IPv6LL Nov 8 00:29:45.414662 systemd-networkd[1505]: calid20bea3e92b: Gained IPv6LL Nov 8 00:29:45.853802 kubelet[3070]: E1108 00:29:45.853694 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:29:45.990366 systemd-networkd[1505]: caliabc02d287ae: Gained IPv6LL Nov 8 00:29:46.912167 kubelet[3070]: I1108 00:29:46.912089 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:29:47.869224 kernel: bpftool[6566]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:29:48.028610 systemd-networkd[1505]: vxlan.calico: Link UP Nov 8 00:29:48.028613 systemd-networkd[1505]: vxlan.calico: Gained carrier Nov 8 
00:29:48.675117 containerd[1809]: time="2025-11-08T00:29:48.675038775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:29:49.094572 containerd[1809]: time="2025-11-08T00:29:49.094492562Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:49.095004 containerd[1809]: time="2025-11-08T00:29:49.094925099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:29:49.095038 containerd[1809]: time="2025-11-08T00:29:49.094994074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:29:49.095089 kubelet[3070]: E1108 00:29:49.095055 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:49.095089 kubelet[3070]: E1108 00:29:49.095080 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:29:49.095306 kubelet[3070]: E1108 00:29:49.095127 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:49.095544 containerd[1809]: time="2025-11-08T00:29:49.095531456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:29:49.254277 systemd-networkd[1505]: vxlan.calico: Gained IPv6LL Nov 8 00:29:49.498711 containerd[1809]: time="2025-11-08T00:29:49.498592192Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:49.499548 containerd[1809]: time="2025-11-08T00:29:49.499521878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:29:49.499636 containerd[1809]: time="2025-11-08T00:29:49.499590400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:29:49.499766 kubelet[3070]: E1108 00:29:49.499693 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:49.499809 kubelet[3070]: E1108 00:29:49.499772 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:29:49.499844 kubelet[3070]: E1108 00:29:49.499822 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:49.499899 kubelet[3070]: E1108 00:29:49.499847 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:29:49.537838 kubelet[3070]: I1108 00:29:49.537731 3070 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:29:55.680388 containerd[1809]: time="2025-11-08T00:29:55.680275228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:29:56.088058 containerd[1809]: time="2025-11-08T00:29:56.087866727Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:56.088865 containerd[1809]: time="2025-11-08T00:29:56.088838570Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:29:56.088960 containerd[1809]: time="2025-11-08T00:29:56.088902158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:29:56.089040 kubelet[3070]: E1108 00:29:56.089018 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:56.089284 kubelet[3070]: E1108 00:29:56.089047 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:29:56.089284 kubelet[3070]: E1108 00:29:56.089091 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:56.089532 containerd[1809]: time="2025-11-08T00:29:56.089520090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:29:56.458049 containerd[1809]: time="2025-11-08T00:29:56.458012348Z" level=info msg="trying next host 
- response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:56.458694 containerd[1809]: time="2025-11-08T00:29:56.458626845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:29:56.458732 containerd[1809]: time="2025-11-08T00:29:56.458683371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:29:56.458798 kubelet[3070]: E1108 00:29:56.458766 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:56.458847 kubelet[3070]: E1108 00:29:56.458809 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:29:56.458887 kubelet[3070]: E1108 00:29:56.458874 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:56.458931 kubelet[3070]: E1108 00:29:56.458911 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:29:56.671644 containerd[1809]: time="2025-11-08T00:29:56.671617797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:29:57.087664 containerd[1809]: time="2025-11-08T00:29:57.087539737Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:57.088566 containerd[1809]: time="2025-11-08T00:29:57.088505066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:29:57.088609 containerd[1809]: time="2025-11-08T00:29:57.088572454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" 
Nov 8 00:29:57.088708 kubelet[3070]: E1108 00:29:57.088661 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:57.088708 kubelet[3070]: E1108 00:29:57.088694 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:29:57.088865 kubelet[3070]: E1108 00:29:57.088811 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:57.088865 kubelet[3070]: E1108 00:29:57.088842 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" 
podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:29:57.088932 containerd[1809]: time="2025-11-08T00:29:57.088892846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:29:57.469462 containerd[1809]: time="2025-11-08T00:29:57.469337941Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:57.470207 containerd[1809]: time="2025-11-08T00:29:57.470184246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:29:57.470368 containerd[1809]: time="2025-11-08T00:29:57.470273316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:57.470444 kubelet[3070]: E1108 00:29:57.470397 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:57.470661 kubelet[3070]: E1108 00:29:57.470462 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:29:57.470661 kubelet[3070]: E1108 00:29:57.470576 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:57.470661 kubelet[3070]: E1108 00:29:57.470603 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:29:57.470780 containerd[1809]: time="2025-11-08T00:29:57.470664086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:29:57.845612 containerd[1809]: time="2025-11-08T00:29:57.845413040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:29:57.846304 containerd[1809]: time="2025-11-08T00:29:57.846278397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:29:57.846364 containerd[1809]: time="2025-11-08T00:29:57.846346653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:29:57.846464 kubelet[3070]: E1108 00:29:57.846441 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:57.846508 kubelet[3070]: E1108 00:29:57.846470 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:29:57.846529 kubelet[3070]: E1108 00:29:57.846519 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:29:57.846548 kubelet[3070]: E1108 00:29:57.846538 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:29:59.673894 containerd[1809]: time="2025-11-08T00:29:59.673828741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:30:00.042689 containerd[1809]: time="2025-11-08T00:30:00.042630307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:00.043103 
containerd[1809]: time="2025-11-08T00:30:00.043056511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:30:00.043142 containerd[1809]: time="2025-11-08T00:30:00.043100496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:00.043247 kubelet[3070]: E1108 00:30:00.043220 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:00.043453 kubelet[3070]: E1108 00:30:00.043253 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:00.043453 kubelet[3070]: E1108 00:30:00.043306 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:00.043453 kubelet[3070]: E1108 00:30:00.043326 3070 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:30:00.674301 kubelet[3070]: E1108 00:30:00.674185 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:30:01.666428 containerd[1809]: time="2025-11-08T00:30:01.666405511Z" level=info msg="StopPodSandbox for \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\"" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.694 [WARNING][6772] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0", GenerateName:"calico-kube-controllers-7c8d496dff-", Namespace:"calico-system", SelfLink:"", UID:"7c46dfff-678e-44bc-9089-cef43e8fa0d3", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8d496dff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c", Pod:"calico-kube-controllers-7c8d496dff-jlg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ed9eddce0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.695 [INFO][6772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.695 [INFO][6772] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" iface="eth0" netns="" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.695 [INFO][6772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.695 [INFO][6772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.706 [INFO][6789] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.706 [INFO][6789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.706 [INFO][6789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.710 [WARNING][6789] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.710 [INFO][6789] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.711 [INFO][6789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.713355 containerd[1809]: 2025-11-08 00:30:01.712 [INFO][6772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.713846 containerd[1809]: time="2025-11-08T00:30:01.713389851Z" level=info msg="TearDown network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\" successfully" Nov 8 00:30:01.713846 containerd[1809]: time="2025-11-08T00:30:01.713407074Z" level=info msg="StopPodSandbox for \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\" returns successfully" Nov 8 00:30:01.713973 containerd[1809]: time="2025-11-08T00:30:01.713927616Z" level=info msg="RemovePodSandbox for \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\"" Nov 8 00:30:01.713993 containerd[1809]: time="2025-11-08T00:30:01.713980056Z" level=info msg="Forcibly stopping sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\"" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.733 [WARNING][6815] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0", GenerateName:"calico-kube-controllers-7c8d496dff-", Namespace:"calico-system", SelfLink:"", UID:"7c46dfff-678e-44bc-9089-cef43e8fa0d3", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c8d496dff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"3ccd561983ae70fe025e5fa26a8a90d0177dba7e6816e26341d02b1411f96b9c", Pod:"calico-kube-controllers-7c8d496dff-jlg6z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5ed9eddce0e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.733 [INFO][6815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.733 [INFO][6815] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" iface="eth0" netns="" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.733 [INFO][6815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.733 [INFO][6815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.743 [INFO][6834] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.743 [INFO][6834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.743 [INFO][6834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.748 [WARNING][6834] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.749 [INFO][6834] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" HandleID="k8s-pod-network.cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--kube--controllers--7c8d496dff--jlg6z-eth0" Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.750 [INFO][6834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.752364 containerd[1809]: 2025-11-08 00:30:01.751 [INFO][6815] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f" Nov 8 00:30:01.752659 containerd[1809]: time="2025-11-08T00:30:01.752366383Z" level=info msg="TearDown network for sandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\" successfully" Nov 8 00:30:01.753971 containerd[1809]: time="2025-11-08T00:30:01.753929143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:30:01.753971 containerd[1809]: time="2025-11-08T00:30:01.753958150Z" level=info msg="RemovePodSandbox \"cd401c2c80319c40fefb44cb5d2e0c0300877aaf9ccdc7c317e6e2bf1d093a1f\" returns successfully" Nov 8 00:30:01.754255 containerd[1809]: time="2025-11-08T00:30:01.754217454Z" level=info msg="StopPodSandbox for \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\"" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.772 [WARNING][6859] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"610570e2-7f08-4a1f-b974-d26709be3c92", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914", Pod:"coredns-66bc5c9577-7x794", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64cc8e0b3a1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.772 [INFO][6859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.772 [INFO][6859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" iface="eth0" netns="" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.772 [INFO][6859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.772 [INFO][6859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.785 [INFO][6875] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.785 [INFO][6875] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.785 [INFO][6875] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.789 [WARNING][6875] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.789 [INFO][6875] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.790 [INFO][6875] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.793899 containerd[1809]: 2025-11-08 00:30:01.793 [INFO][6859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.794214 containerd[1809]: time="2025-11-08T00:30:01.793930476Z" level=info msg="TearDown network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\" successfully" Nov 8 00:30:01.794214 containerd[1809]: time="2025-11-08T00:30:01.793950917Z" level=info msg="StopPodSandbox for \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\" returns successfully" Nov 8 00:30:01.794214 containerd[1809]: time="2025-11-08T00:30:01.794204409Z" level=info msg="RemovePodSandbox for \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\"" Nov 8 00:30:01.794266 containerd[1809]: time="2025-11-08T00:30:01.794224649Z" level=info msg="Forcibly stopping sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\"" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.812 [WARNING][6901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"610570e2-7f08-4a1f-b974-d26709be3c92", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"d629c22b3073e4f7193440a3e5deaf206bb72e1c56f683f42e2c274936747914", Pod:"coredns-66bc5c9577-7x794", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64cc8e0b3a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.812 [INFO][6901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.812 [INFO][6901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" iface="eth0" netns="" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.812 [INFO][6901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.812 [INFO][6901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.823 [INFO][6919] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.823 [INFO][6919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.823 [INFO][6919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.826 [WARNING][6919] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.826 [INFO][6919] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" HandleID="k8s-pod-network.452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--7x794-eth0" Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.827 [INFO][6919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.829186 containerd[1809]: 2025-11-08 00:30:01.828 [INFO][6901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b" Nov 8 00:30:01.829482 containerd[1809]: time="2025-11-08T00:30:01.829197743Z" level=info msg="TearDown network for sandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\" successfully" Nov 8 00:30:01.840135 containerd[1809]: time="2025-11-08T00:30:01.840082803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:30:01.840135 containerd[1809]: time="2025-11-08T00:30:01.840127514Z" level=info msg="RemovePodSandbox \"452b7c47ee8b18987489bb0df5f22deb930b9f2763eefbe5261cef989344234b\" returns successfully" Nov 8 00:30:01.840506 containerd[1809]: time="2025-11-08T00:30:01.840464467Z" level=info msg="StopPodSandbox for \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\"" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.858 [WARNING][6944] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.858 [INFO][6944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.858 [INFO][6944] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" iface="eth0" netns="" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.858 [INFO][6944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.858 [INFO][6944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.869 [INFO][6962] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.869 [INFO][6962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.869 [INFO][6962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.873 [WARNING][6962] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.873 [INFO][6962] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.874 [INFO][6962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.876130 containerd[1809]: 2025-11-08 00:30:01.875 [INFO][6944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.876410 containerd[1809]: time="2025-11-08T00:30:01.876168743Z" level=info msg="TearDown network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\" successfully" Nov 8 00:30:01.876410 containerd[1809]: time="2025-11-08T00:30:01.876190755Z" level=info msg="StopPodSandbox for \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\" returns successfully" Nov 8 00:30:01.876502 containerd[1809]: time="2025-11-08T00:30:01.876468146Z" level=info msg="RemovePodSandbox for \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\"" Nov 8 00:30:01.876502 containerd[1809]: time="2025-11-08T00:30:01.876492039Z" level=info msg="Forcibly stopping sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\"" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.902 [WARNING][6989] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" WorkloadEndpoint="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.902 [INFO][6989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.902 [INFO][6989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" iface="eth0" netns="" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.902 [INFO][6989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.902 [INFO][6989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.913 [INFO][7006] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.913 [INFO][7006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.913 [INFO][7006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.917 [WARNING][7006] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.917 [INFO][7006] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" HandleID="k8s-pod-network.2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Workload="ci--4081.3.6--n--8b27c00582-k8s-whisker--779f6bb48c--26p75-eth0" Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.918 [INFO][7006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.919698 containerd[1809]: 2025-11-08 00:30:01.918 [INFO][6989] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f" Nov 8 00:30:01.919698 containerd[1809]: time="2025-11-08T00:30:01.919653966Z" level=info msg="TearDown network for sandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\" successfully" Nov 8 00:30:01.920962 containerd[1809]: time="2025-11-08T00:30:01.920949132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:30:01.921001 containerd[1809]: time="2025-11-08T00:30:01.920974322Z" level=info msg="RemovePodSandbox \"2c211c5a041507a069ee98f9b28c36357d97c8a0408929c92357e364a50e811f\" returns successfully" Nov 8 00:30:01.921268 containerd[1809]: time="2025-11-08T00:30:01.921254480Z" level=info msg="StopPodSandbox for \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\"" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.938 [WARNING][7033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec5b66b-733a-489d-9c96-c95ce9255384", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b", Pod:"calico-apiserver-6694c6b5c5-xb2cq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabc02d287ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.938 [INFO][7033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.938 [INFO][7033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" iface="eth0" netns="" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.938 [INFO][7033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.938 [INFO][7033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.948 [INFO][7048] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.948 [INFO][7048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.948 [INFO][7048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.951 [WARNING][7048] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.951 [INFO][7048] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.952 [INFO][7048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.954351 containerd[1809]: 2025-11-08 00:30:01.953 [INFO][7033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.954351 containerd[1809]: time="2025-11-08T00:30:01.954345308Z" level=info msg="TearDown network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\" successfully" Nov 8 00:30:01.954666 containerd[1809]: time="2025-11-08T00:30:01.954361832Z" level=info msg="StopPodSandbox for \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\" returns successfully" Nov 8 00:30:01.954666 containerd[1809]: time="2025-11-08T00:30:01.954636887Z" level=info msg="RemovePodSandbox for \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\"" Nov 8 00:30:01.954666 containerd[1809]: time="2025-11-08T00:30:01.954653009Z" level=info msg="Forcibly stopping sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\"" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.971 [WARNING][7071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5ec5b66b-733a-489d-9c96-c95ce9255384", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"66ff05c749704ad8e23f0f78b920fd6959e023bcb45b83379ad588da6a8e963b", Pod:"calico-apiserver-6694c6b5c5-xb2cq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliabc02d287ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.971 [INFO][7071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.971 [INFO][7071] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" iface="eth0" netns="" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.971 [INFO][7071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.971 [INFO][7071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.981 [INFO][7086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.981 [INFO][7086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.981 [INFO][7086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.985 [WARNING][7086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.985 [INFO][7086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" HandleID="k8s-pod-network.ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--xb2cq-eth0" Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.986 [INFO][7086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:01.988287 containerd[1809]: 2025-11-08 00:30:01.987 [INFO][7071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0" Nov 8 00:30:01.988287 containerd[1809]: time="2025-11-08T00:30:01.988265253Z" level=info msg="TearDown network for sandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\" successfully" Nov 8 00:30:01.989747 containerd[1809]: time="2025-11-08T00:30:01.989734424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:30:01.989777 containerd[1809]: time="2025-11-08T00:30:01.989759475Z" level=info msg="RemovePodSandbox \"ef872b8623653d49d38505b8506168e295830cbc68689ad159b2b789999323b0\" returns successfully" Nov 8 00:30:01.990018 containerd[1809]: time="2025-11-08T00:30:01.990005288Z" level=info msg="StopPodSandbox for \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\"" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.006 [WARNING][7113] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4457e65-0840-44a3-9b91-05cc2050df9f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6", Pod:"calico-apiserver-6694c6b5c5-rk6lq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec7e4cb991", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.006 [INFO][7113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.007 [INFO][7113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" iface="eth0" netns="" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.007 [INFO][7113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.007 [INFO][7113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.016 [INFO][7130] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.016 [INFO][7130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.016 [INFO][7130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.020 [WARNING][7130] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.020 [INFO][7130] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.021 [INFO][7130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:02.022658 containerd[1809]: 2025-11-08 00:30:02.021 [INFO][7113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.022658 containerd[1809]: time="2025-11-08T00:30:02.022655289Z" level=info msg="TearDown network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\" successfully" Nov 8 00:30:02.022970 containerd[1809]: time="2025-11-08T00:30:02.022671352Z" level=info msg="StopPodSandbox for \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\" returns successfully" Nov 8 00:30:02.022970 containerd[1809]: time="2025-11-08T00:30:02.022920058Z" level=info msg="RemovePodSandbox for \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\"" Nov 8 00:30:02.022970 containerd[1809]: time="2025-11-08T00:30:02.022935672Z" level=info msg="Forcibly stopping sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\"" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.039 [WARNING][7155] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0", GenerateName:"calico-apiserver-6694c6b5c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4457e65-0840-44a3-9b91-05cc2050df9f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6694c6b5c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"02545e77215872a37c90a5428e2c9b15bf6abf55abb3d780722bfa9631772de6", Pod:"calico-apiserver-6694c6b5c5-rk6lq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ec7e4cb991", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.039 [INFO][7155] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.039 [INFO][7155] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" iface="eth0" netns="" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.039 [INFO][7155] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.039 [INFO][7155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.049 [INFO][7170] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.049 [INFO][7170] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.050 [INFO][7170] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.054 [WARNING][7170] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.054 [INFO][7170] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" HandleID="k8s-pod-network.5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Workload="ci--4081.3.6--n--8b27c00582-k8s-calico--apiserver--6694c6b5c5--rk6lq-eth0" Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.055 [INFO][7170] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:02.056915 containerd[1809]: 2025-11-08 00:30:02.056 [INFO][7155] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8" Nov 8 00:30:02.057232 containerd[1809]: time="2025-11-08T00:30:02.056939889Z" level=info msg="TearDown network for sandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\" successfully" Nov 8 00:30:02.058306 containerd[1809]: time="2025-11-08T00:30:02.058294835Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:30:02.058333 containerd[1809]: time="2025-11-08T00:30:02.058319392Z" level=info msg="RemovePodSandbox \"5d2c666973f1cd3af82c0c4edc12dd9ef3d1b298d7209eef711d046dece5d9e8\" returns successfully" Nov 8 00:30:02.058579 containerd[1809]: time="2025-11-08T00:30:02.058569408Z" level=info msg="StopPodSandbox for \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\"" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.076 [WARNING][7193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d510fe8b-db97-40db-ab28-3634909f38a6", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0", Pod:"goldmane-7c778bb748-t42z5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali8fb6fbf2e59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.076 [INFO][7193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.076 [INFO][7193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" iface="eth0" netns="" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.076 [INFO][7193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.076 [INFO][7193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.086 [INFO][7207] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.086 [INFO][7207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.086 [INFO][7207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.090 [WARNING][7207] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.090 [INFO][7207] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.092 [INFO][7207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:02.093508 containerd[1809]: 2025-11-08 00:30:02.092 [INFO][7193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.093801 containerd[1809]: time="2025-11-08T00:30:02.093527622Z" level=info msg="TearDown network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\" successfully" Nov 8 00:30:02.093801 containerd[1809]: time="2025-11-08T00:30:02.093544459Z" level=info msg="StopPodSandbox for \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\" returns successfully" Nov 8 00:30:02.093801 containerd[1809]: time="2025-11-08T00:30:02.093788982Z" level=info msg="RemovePodSandbox for \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\"" Nov 8 00:30:02.093856 containerd[1809]: time="2025-11-08T00:30:02.093805220Z" level=info msg="Forcibly stopping sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\"" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.110 [WARNING][7231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"d510fe8b-db97-40db-ab28-3634909f38a6", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"d44e04dc6cf5612f858289523246ba819d35c98ff5b5ca6df73cf6d8f87dddd0", Pod:"goldmane-7c778bb748-t42z5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8fb6fbf2e59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.110 [INFO][7231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.110 [INFO][7231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" iface="eth0" netns="" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.110 [INFO][7231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.110 [INFO][7231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.120 [INFO][7247] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.120 [INFO][7247] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.120 [INFO][7247] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.125 [WARNING][7247] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.125 [INFO][7247] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" HandleID="k8s-pod-network.956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Workload="ci--4081.3.6--n--8b27c00582-k8s-goldmane--7c778bb748--t42z5-eth0" Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.126 [INFO][7247] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:30:02.127618 containerd[1809]: 2025-11-08 00:30:02.126 [INFO][7231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04" Nov 8 00:30:02.127914 containerd[1809]: time="2025-11-08T00:30:02.127632013Z" level=info msg="TearDown network for sandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\" successfully" Nov 8 00:30:02.129087 containerd[1809]: time="2025-11-08T00:30:02.129050271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:30:02.129087 containerd[1809]: time="2025-11-08T00:30:02.129075704Z" level=info msg="RemovePodSandbox \"956136773a8ca38016f4d3d78d50fe4d71c4cd4357a05bfe500861e240225d04\" returns successfully" Nov 8 00:30:02.129372 containerd[1809]: time="2025-11-08T00:30:02.129329731Z" level=info msg="StopPodSandbox for \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\"" Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.147 [WARNING][7272] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"24a3af85-008b-4d6d-85c6-e1f4e122242a", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047", Pod:"coredns-66bc5c9577-nl8rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20bea3e92b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.147 [INFO][7272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.147 [INFO][7272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" iface="eth0" netns=""
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.147 [INFO][7272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.147 [INFO][7272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.158 [INFO][7288] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0"
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.158 [INFO][7288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.158 [INFO][7288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.161 [WARNING][7288] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0"
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.161 [INFO][7288] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0"
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.162 [INFO][7288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:30:02.164389 containerd[1809]: 2025-11-08 00:30:02.163 [INFO][7272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"
Nov 8 00:30:02.164389 containerd[1809]: time="2025-11-08T00:30:02.164376806Z" level=info msg="TearDown network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\" successfully"
Nov 8 00:30:02.164389 containerd[1809]: time="2025-11-08T00:30:02.164393014Z" level=info msg="StopPodSandbox for \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\" returns successfully"
Nov 8 00:30:02.164734 containerd[1809]: time="2025-11-08T00:30:02.164674590Z" level=info msg="RemovePodSandbox for \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\""
Nov 8 00:30:02.164734 containerd[1809]: time="2025-11-08T00:30:02.164690468Z" level=info msg="Forcibly stopping sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\""
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.183 [WARNING][7311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"24a3af85-008b-4d6d-85c6-e1f4e122242a", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"707d121cc69ec3792e0d3fcd9b7dd9dcc1cbe22a73be5da175f7bdbb1c3fe047", Pod:"coredns-66bc5c9577-nl8rk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid20bea3e92b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.183 [INFO][7311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.183 [INFO][7311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" iface="eth0" netns=""
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.183 [INFO][7311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.183 [INFO][7311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.194 [INFO][7328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0"
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.194 [INFO][7328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.194 [INFO][7328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.198 [WARNING][7328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0"
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.198 [INFO][7328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" HandleID="k8s-pod-network.225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9" Workload="ci--4081.3.6--n--8b27c00582-k8s-coredns--66bc5c9577--nl8rk-eth0"
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.199 [INFO][7328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:30:02.201166 containerd[1809]: 2025-11-08 00:30:02.200 [INFO][7311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9"
Nov 8 00:30:02.201494 containerd[1809]: time="2025-11-08T00:30:02.201191491Z" level=info msg="TearDown network for sandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\" successfully"
Nov 8 00:30:02.202574 containerd[1809]: time="2025-11-08T00:30:02.202561866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:30:02.202601 containerd[1809]: time="2025-11-08T00:30:02.202587229Z" level=info msg="RemovePodSandbox \"225e4a33c75d05b84dcf2157f736fe64ac0189c9249c9ebd95f8243c83ab63f9\" returns successfully"
Nov 8 00:30:02.202827 containerd[1809]: time="2025-11-08T00:30:02.202816168Z" level=info msg="StopPodSandbox for \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\""
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.220 [WARNING][7353] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db14322-3de3-476c-bc43-59b2bd1acea4", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec", Pod:"csi-node-driver-njlbj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85214257e4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.220 [INFO][7353] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.220 [INFO][7353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" iface="eth0" netns=""
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.220 [INFO][7353] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.220 [INFO][7353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.230 [INFO][7370] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0"
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.230 [INFO][7370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.230 [INFO][7370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.234 [WARNING][7370] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0"
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.234 [INFO][7370] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0"
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.235 [INFO][7370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:30:02.237203 containerd[1809]: 2025-11-08 00:30:02.236 [INFO][7353] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.237203 containerd[1809]: time="2025-11-08T00:30:02.237190607Z" level=info msg="TearDown network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\" successfully"
Nov 8 00:30:02.237203 containerd[1809]: time="2025-11-08T00:30:02.237206252Z" level=info msg="StopPodSandbox for \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\" returns successfully"
Nov 8 00:30:02.237520 containerd[1809]: time="2025-11-08T00:30:02.237481597Z" level=info msg="RemovePodSandbox for \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\""
Nov 8 00:30:02.237520 containerd[1809]: time="2025-11-08T00:30:02.237500284Z" level=info msg="Forcibly stopping sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\""
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.255 [WARNING][7396] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db14322-3de3-476c-bc43-59b2bd1acea4", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 29, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-8b27c00582", ContainerID:"8eade313231ba5e6702b9ea2f66c0430d057a80a458ffce2eaffd35bb6f44aec", Pod:"csi-node-driver-njlbj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali85214257e4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.255 [INFO][7396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.255 [INFO][7396] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" iface="eth0" netns=""
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.255 [INFO][7396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.255 [INFO][7396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.265 [INFO][7411] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0"
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.265 [INFO][7411] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.265 [INFO][7411] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.269 [WARNING][7411] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0"
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.269 [INFO][7411] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" HandleID="k8s-pod-network.e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18" Workload="ci--4081.3.6--n--8b27c00582-k8s-csi--node--driver--njlbj-eth0"
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.270 [INFO][7411] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:30:02.272277 containerd[1809]: 2025-11-08 00:30:02.271 [INFO][7396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18"
Nov 8 00:30:02.272562 containerd[1809]: time="2025-11-08T00:30:02.272277148Z" level=info msg="TearDown network for sandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\" successfully"
Nov 8 00:30:02.273585 containerd[1809]: time="2025-11-08T00:30:02.273544980Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:30:02.273585 containerd[1809]: time="2025-11-08T00:30:02.273569910Z" level=info msg="RemovePodSandbox \"e8587bcf079c40fb0f00b387a760fcc1fbafa51b762d195010a43b4403aa3d18\" returns successfully"
Nov 8 00:30:08.677050 kubelet[3070]: E1108 00:30:08.675826 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:30:10.673179 kubelet[3070]: E1108 00:30:10.673102 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:30:10.673957 kubelet[3070]: E1108 00:30:10.673864 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:30:11.675185 kubelet[3070]: E1108 00:30:11.675012 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:30:12.672479 kubelet[3070]: E1108 00:30:12.672451 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:30:13.672832 containerd[1809]: time="2025-11-08T00:30:13.672766565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:30:14.055607 containerd[1809]: time="2025-11-08T00:30:14.055478088Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:14.066430 containerd[1809]: time="2025-11-08T00:30:14.066377163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:30:14.066475 containerd[1809]: time="2025-11-08T00:30:14.066429223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:30:14.066616 kubelet[3070]: E1108 00:30:14.066558 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:30:14.066616 kubelet[3070]: E1108 00:30:14.066592 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:30:14.066863 kubelet[3070]: E1108 00:30:14.066654 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:14.067178 containerd[1809]: time="2025-11-08T00:30:14.067132697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:30:14.435199 containerd[1809]: time="2025-11-08T00:30:14.435094743Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:14.435646 containerd[1809]: time="2025-11-08T00:30:14.435561598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:30:14.435646 containerd[1809]: time="2025-11-08T00:30:14.435626969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:30:14.435826 kubelet[3070]: E1108 00:30:14.435757 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:30:14.435826 kubelet[3070]: E1108 00:30:14.435802 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:30:14.435883 kubelet[3070]: E1108 00:30:14.435843 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:14.435883 kubelet[3070]: E1108 00:30:14.435869 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:30:20.672630 containerd[1809]: time="2025-11-08T00:30:20.672605877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:30:21.047010 containerd[1809]: time="2025-11-08T00:30:21.046981976Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:21.047390 containerd[1809]: time="2025-11-08T00:30:21.047342839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:30:21.047442 containerd[1809]: time="2025-11-08T00:30:21.047388115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:30:21.047589 kubelet[3070]: E1108 00:30:21.047530 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:30:21.047848 kubelet[3070]: E1108 00:30:21.047590 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:30:21.047848 kubelet[3070]: E1108 00:30:21.047702 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:21.047848 kubelet[3070]: E1108 00:30:21.047725 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:30:22.674477 containerd[1809]: time="2025-11-08T00:30:22.674381007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:30:23.080510 containerd[1809]: time="2025-11-08T00:30:23.080417885Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:23.081345 containerd[1809]: time="2025-11-08T00:30:23.081317528Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:30:23.081411 containerd[1809]: time="2025-11-08T00:30:23.081389388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:30:23.081553 kubelet[3070]: E1108 00:30:23.081482 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:23.081553 kubelet[3070]: E1108 00:30:23.081512 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:30:23.081767 kubelet[3070]: E1108 00:30:23.081635 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:23.081767 kubelet[3070]: E1108 00:30:23.081660 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:30:23.081853 containerd[1809]: time="2025-11-08T00:30:23.081729775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:30:23.509914 containerd[1809]: time="2025-11-08T00:30:23.509846431Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:23.510708 containerd[1809]: time="2025-11-08T00:30:23.510683822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:30:23.510793 containerd[1809]: time="2025-11-08T00:30:23.510775640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:30:23.510876 kubelet[3070]: E1108 00:30:23.510853 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:30:23.510921 kubelet[3070]: E1108 00:30:23.510884 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:30:23.510950 kubelet[3070]: E1108 00:30:23.510931 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:23.511470 containerd[1809]: time="2025-11-08T00:30:23.511459233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:30:23.889795 containerd[1809]: time="2025-11-08T00:30:23.889685920Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:30:23.890098 containerd[1809]: time="2025-11-08T00:30:23.890075149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:30:23.890171 containerd[1809]: time="2025-11-08T00:30:23.890121966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:30:23.890335 kubelet[3070]: E1108 00:30:23.890287 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:30:23.890335 kubelet[3070]: E1108 00:30:23.890327 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:30:23.890396 kubelet[3070]: E1108 00:30:23.890374 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:30:23.890433 kubelet[3070]: E1108 00:30:23.890401 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:30:24.674960 containerd[1809]: time="2025-11-08T00:30:24.674863062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:30:25.055026 containerd[1809]: time="2025-11-08T00:30:25.054910092Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:25.055681 containerd[1809]: time="2025-11-08T00:30:25.055611682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:30:25.055750 containerd[1809]: time="2025-11-08T00:30:25.055681365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:25.055810 kubelet[3070]: E1108 00:30:25.055780 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:30:25.055934 kubelet[3070]: E1108 00:30:25.055816 3070 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:30:25.056016 kubelet[3070]: E1108 00:30:25.055958 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:25.056016 kubelet[3070]: E1108 00:30:25.055983 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:30:25.056056 containerd[1809]: time="2025-11-08T00:30:25.056043974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:30:25.456086 containerd[1809]: time="2025-11-08T00:30:25.456063487Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:30:25.456643 containerd[1809]: time="2025-11-08T00:30:25.456598256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:30:25.456679 containerd[1809]: time="2025-11-08T00:30:25.456642145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:30:25.456750 kubelet[3070]: E1108 00:30:25.456730 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:25.456790 kubelet[3070]: E1108 00:30:25.456759 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:30:25.456837 kubelet[3070]: E1108 00:30:25.456825 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:30:25.456875 kubelet[3070]: E1108 00:30:25.456852 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:30:29.673040 kubelet[3070]: E1108 00:30:29.672995 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:30:35.672268 kubelet[3070]: E1108 00:30:35.672222 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:30:38.671796 kubelet[3070]: E1108 00:30:38.671743 3070 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:30:38.672207 kubelet[3070]: E1108 00:30:38.672118 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:30:39.673524 kubelet[3070]: E1108 00:30:39.673430 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:30:39.673524 kubelet[3070]: E1108 00:30:39.673494 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:30:40.674930 kubelet[3070]: E1108 00:30:40.674758 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:30:49.672075 kubelet[3070]: E1108 
00:30:49.672049 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:30:51.675876 kubelet[3070]: E1108 00:30:51.675796 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:30:51.677133 kubelet[3070]: E1108 00:30:51.677037 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:30:52.672832 kubelet[3070]: E1108 00:30:52.672791 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:30:53.672279 kubelet[3070]: E1108 00:30:53.672228 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" 
podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:30:54.672436 kubelet[3070]: E1108 00:30:54.672370 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:31:03.675015 containerd[1809]: time="2025-11-08T00:31:03.674887408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:04.017684 containerd[1809]: time="2025-11-08T00:31:04.017621876Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:04.018167 containerd[1809]: time="2025-11-08T00:31:04.018113321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:04.018205 containerd[1809]: time="2025-11-08T00:31:04.018172573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:04.018356 kubelet[3070]: E1108 00:31:04.018289 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:04.018356 kubelet[3070]: E1108 00:31:04.018319 3070 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:04.018603 kubelet[3070]: E1108 00:31:04.018364 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:04.018866 containerd[1809]: time="2025-11-08T00:31:04.018830318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:04.361304 containerd[1809]: time="2025-11-08T00:31:04.361039006Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:04.375910 containerd[1809]: time="2025-11-08T00:31:04.375843014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:04.375910 containerd[1809]: time="2025-11-08T00:31:04.375875498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:04.376002 kubelet[3070]: E1108 00:31:04.375978 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:04.376049 kubelet[3070]: E1108 00:31:04.376006 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:04.376075 kubelet[3070]: E1108 00:31:04.376054 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:04.376130 kubelet[3070]: E1108 00:31:04.376086 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:31:04.674467 containerd[1809]: time="2025-11-08T00:31:04.674243574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:05.027851 containerd[1809]: time="2025-11-08T00:31:05.027759226Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:05.028695 containerd[1809]: time="2025-11-08T00:31:05.028668087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:05.028759 containerd[1809]: time="2025-11-08T00:31:05.028736567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:05.028837 kubelet[3070]: E1108 00:31:05.028819 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:05.029017 kubelet[3070]: E1108 00:31:05.028843 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:05.029017 
kubelet[3070]: E1108 00:31:05.028882 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:05.029017 kubelet[3070]: E1108 00:31:05.028900 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:31:05.672100 containerd[1809]: time="2025-11-08T00:31:05.672079129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:06.031233 containerd[1809]: time="2025-11-08T00:31:06.031115586Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:06.032110 containerd[1809]: time="2025-11-08T00:31:06.032084337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:06.032200 containerd[1809]: time="2025-11-08T00:31:06.032179732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Nov 8 00:31:06.032335 kubelet[3070]: E1108 00:31:06.032278 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:06.032335 kubelet[3070]: E1108 00:31:06.032312 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:06.032583 kubelet[3070]: E1108 00:31:06.032473 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:06.032583 kubelet[3070]: E1108 00:31:06.032506 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:31:06.032657 containerd[1809]: time="2025-11-08T00:31:06.032564755Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:06.375896 containerd[1809]: time="2025-11-08T00:31:06.375630754Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:06.376582 containerd[1809]: time="2025-11-08T00:31:06.376554925Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:06.376667 containerd[1809]: time="2025-11-08T00:31:06.376623568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:06.376851 kubelet[3070]: E1108 00:31:06.376784 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:06.376851 kubelet[3070]: E1108 00:31:06.376824 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:06.376925 kubelet[3070]: E1108 00:31:06.376890 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:06.376925 kubelet[3070]: E1108 00:31:06.376910 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:31:06.672213 containerd[1809]: time="2025-11-08T00:31:06.672121588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:07.038581 containerd[1809]: time="2025-11-08T00:31:07.038528197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:07.039147 containerd[1809]: time="2025-11-08T00:31:07.039087693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:07.039205 containerd[1809]: time="2025-11-08T00:31:07.039157707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:07.039334 kubelet[3070]: E1108 00:31:07.039283 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:07.039334 kubelet[3070]: E1108 00:31:07.039313 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:07.039522 kubelet[3070]: E1108 00:31:07.039358 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:07.039522 kubelet[3070]: E1108 00:31:07.039378 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:31:07.674825 containerd[1809]: time="2025-11-08T00:31:07.674724031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:08.032329 containerd[1809]: time="2025-11-08T00:31:08.032194704Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:08.048516 containerd[1809]: time="2025-11-08T00:31:08.048448416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:08.048516 containerd[1809]: time="2025-11-08T00:31:08.048489514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:08.048840 kubelet[3070]: E1108 00:31:08.048629 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:08.048840 kubelet[3070]: E1108 00:31:08.048673 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:08.048840 kubelet[3070]: E1108 00:31:08.048741 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:08.049352 containerd[1809]: time="2025-11-08T00:31:08.049331176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:08.398521 containerd[1809]: time="2025-11-08T00:31:08.398296890Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 
00:31:08.399145 containerd[1809]: time="2025-11-08T00:31:08.399056971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:08.399182 containerd[1809]: time="2025-11-08T00:31:08.399131722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:08.399293 kubelet[3070]: E1108 00:31:08.399223 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:08.399293 kubelet[3070]: E1108 00:31:08.399271 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:08.399357 kubelet[3070]: E1108 00:31:08.399326 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Nov 8 00:31:08.399379 kubelet[3070]: E1108 00:31:08.399351 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:31:17.672915 kubelet[3070]: E1108 00:31:17.672857 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:31:18.671583 kubelet[3070]: E1108 00:31:18.671558 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:31:18.671807 kubelet[3070]: E1108 00:31:18.671782 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:31:21.672116 kubelet[3070]: E1108 00:31:21.672094 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:31:21.672116 kubelet[3070]: E1108 00:31:21.672109 3070 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:31:23.674855 kubelet[3070]: E1108 00:31:23.674742 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:31:29.672219 kubelet[3070]: E1108 00:31:29.672192 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:31:29.672705 kubelet[3070]: E1108 00:31:29.672449 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:31:30.671821 kubelet[3070]: E1108 00:31:30.671777 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" 
podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:31:34.673092 kubelet[3070]: E1108 00:31:34.673050 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:31:36.673557 kubelet[3070]: E1108 00:31:36.673457 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:31:36.674675 kubelet[3070]: E1108 00:31:36.673576 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:31:40.671987 kubelet[3070]: E1108 00:31:40.671926 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:31:41.673050 kubelet[3070]: E1108 00:31:41.673008 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:31:42.673657 kubelet[3070]: E1108 00:31:42.673577 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:31:47.675951 kubelet[3070]: E1108 00:31:47.675837 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:31:48.671642 kubelet[3070]: E1108 00:31:48.671589 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:31:48.671642 kubelet[3070]: E1108 00:31:48.671587 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:31:53.672322 kubelet[3070]: E1108 00:31:53.672295 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:31:55.674292 kubelet[3070]: E1108 00:31:55.674086 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:31:56.672730 kubelet[3070]: E1108 00:31:56.672695 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:31:58.672918 kubelet[3070]: E1108 00:31:58.672883 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:32:00.674127 kubelet[3070]: E1108 00:32:00.674029 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:32:03.673844 kubelet[3070]: E1108 00:32:03.673744 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:32:04.672420 kubelet[3070]: E1108 00:32:04.672393 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:32:07.682313 kubelet[3070]: E1108 00:32:07.682198 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:32:09.673566 kubelet[3070]: E1108 00:32:09.673497 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:32:09.673816 kubelet[3070]: E1108 00:32:09.673720 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:32:15.674007 kubelet[3070]: E1108 00:32:15.673864 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:32:15.674007 kubelet[3070]: E1108 00:32:15.673869 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:32:17.672515 kubelet[3070]: E1108 00:32:17.672491 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:32:20.672815 kubelet[3070]: E1108 00:32:20.672790 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:32:21.672954 kubelet[3070]: E1108 00:32:21.672906 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:32:22.673490 kubelet[3070]: E1108 00:32:22.673413 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:32:29.672793 containerd[1809]: 
time="2025-11-08T00:32:29.672758758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:30.089005 containerd[1809]: time="2025-11-08T00:32:30.088884558Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:30.089791 containerd[1809]: time="2025-11-08T00:32:30.089728593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:30.089828 containerd[1809]: time="2025-11-08T00:32:30.089781709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:30.089954 kubelet[3070]: E1108 00:32:30.089900 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:30.089954 kubelet[3070]: E1108 00:32:30.089930 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:30.090166 kubelet[3070]: E1108 00:32:30.090040 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:30.090166 kubelet[3070]: E1108 00:32:30.090067 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:32:30.090229 containerd[1809]: time="2025-11-08T00:32:30.090123083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:32:30.458984 containerd[1809]: time="2025-11-08T00:32:30.458922779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:30.459477 containerd[1809]: time="2025-11-08T00:32:30.459426257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:32:30.459516 containerd[1809]: time="2025-11-08T00:32:30.459481869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:30.459645 kubelet[3070]: E1108 00:32:30.459590 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:30.459645 kubelet[3070]: E1108 00:32:30.459622 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:30.459775 kubelet[3070]: E1108 00:32:30.459734 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:30.459775 kubelet[3070]: E1108 00:32:30.459756 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:32:30.459956 containerd[1809]: time="2025-11-08T00:32:30.459919648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:30.826373 containerd[1809]: time="2025-11-08T00:32:30.826096200Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:30.827073 containerd[1809]: time="2025-11-08T00:32:30.827050510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:30.827162 containerd[1809]: time="2025-11-08T00:32:30.827118299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:30.827315 kubelet[3070]: E1108 00:32:30.827290 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:30.827346 kubelet[3070]: E1108 00:32:30.827324 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:30.827379 kubelet[3070]: E1108 00:32:30.827370 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:30.827409 kubelet[3070]: E1108 00:32:30.827390 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:32:33.672578 containerd[1809]: time="2025-11-08T00:32:33.672556342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:32:34.037967 containerd[1809]: time="2025-11-08T00:32:34.037838550Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:34.038903 containerd[1809]: time="2025-11-08T00:32:34.038834030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:32:34.038943 containerd[1809]: time="2025-11-08T00:32:34.038890512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:32:34.039011 kubelet[3070]: E1108 00:32:34.038988 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:34.039198 kubelet[3070]: E1108 00:32:34.039018 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:34.039198 
kubelet[3070]: E1108 00:32:34.039066 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:34.039530 containerd[1809]: time="2025-11-08T00:32:34.039481518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:32:34.411786 containerd[1809]: time="2025-11-08T00:32:34.411516282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:34.423603 containerd[1809]: time="2025-11-08T00:32:34.423554593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:32:34.423648 containerd[1809]: time="2025-11-08T00:32:34.423594802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:34.423741 kubelet[3070]: E1108 00:32:34.423715 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:34.423781 kubelet[3070]: E1108 00:32:34.423745 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:34.423805 kubelet[3070]: E1108 00:32:34.423796 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:34.423890 kubelet[3070]: E1108 00:32:34.423821 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:32:35.674033 containerd[1809]: time="2025-11-08T00:32:35.673952420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:32:36.096345 containerd[1809]: time="2025-11-08T00:32:36.096199824Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 
8 00:32:36.097464 containerd[1809]: time="2025-11-08T00:32:36.097393566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:32:36.097512 containerd[1809]: time="2025-11-08T00:32:36.097458556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:36.097618 kubelet[3070]: E1108 00:32:36.097572 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:36.097618 kubelet[3070]: E1108 00:32:36.097599 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:36.097824 kubelet[3070]: E1108 00:32:36.097648 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:36.097824 kubelet[3070]: E1108 00:32:36.097669 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:32:37.674458 containerd[1809]: time="2025-11-08T00:32:37.674354576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:32:38.059554 containerd[1809]: time="2025-11-08T00:32:38.059439836Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:38.071673 containerd[1809]: time="2025-11-08T00:32:38.071597254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:32:38.071673 containerd[1809]: time="2025-11-08T00:32:38.071655399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:32:38.071806 kubelet[3070]: E1108 00:32:38.071741 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:38.071806 kubelet[3070]: E1108 00:32:38.071766 3070 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:38.071990 kubelet[3070]: E1108 00:32:38.071806 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:38.072286 containerd[1809]: time="2025-11-08T00:32:38.072241325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:32:38.438634 containerd[1809]: time="2025-11-08T00:32:38.438606002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:38.455587 containerd[1809]: time="2025-11-08T00:32:38.455511609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:32:38.455587 containerd[1809]: time="2025-11-08T00:32:38.455541132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:32:38.455673 kubelet[3070]: E1108 00:32:38.455650 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:38.455704 kubelet[3070]: E1108 00:32:38.455680 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:38.455792 kubelet[3070]: E1108 00:32:38.455752 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:38.455792 kubelet[3070]: E1108 00:32:38.455777 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:32:41.673771 kubelet[3070]: E1108 00:32:41.673744 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:32:41.673771 kubelet[3070]: E1108 00:32:41.673763 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:32:41.674100 kubelet[3070]: E1108 00:32:41.673904 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:32:46.673604 kubelet[3070]: E1108 00:32:46.673532 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:32:49.673348 kubelet[3070]: E1108 00:32:49.673324 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:32:51.680408 kubelet[3070]: E1108 00:32:51.680331 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:32:54.673045 kubelet[3070]: E1108 00:32:54.672963 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:32:54.673045 kubelet[3070]: E1108 00:32:54.672970 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:32:56.674134 kubelet[3070]: E1108 00:32:56.673939 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:33:01.675393 kubelet[3070]: E1108 00:33:01.675349 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:33:03.675310 kubelet[3070]: E1108 00:33:03.675166 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:33:05.672821 kubelet[3070]: E1108 00:33:05.672790 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:33:08.672477 kubelet[3070]: E1108 00:33:08.672436 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:33:09.672932 kubelet[3070]: E1108 00:33:09.672883 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:33:11.675480 kubelet[3070]: E1108 00:33:11.675397 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:33:14.674798 kubelet[3070]: E1108 00:33:14.674698 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:33:17.672252 kubelet[3070]: E1108 00:33:17.672191 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:33:18.675018 kubelet[3070]: E1108 00:33:18.674914 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:33:19.672539 kubelet[3070]: E1108 00:33:19.672510 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:33:23.672428 kubelet[3070]: E1108 00:33:23.672358 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:33:24.671950 kubelet[3070]: E1108 00:33:24.671927 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:33:26.673089 kubelet[3070]: E1108 00:33:26.672991 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:33:29.672110 kubelet[3070]: E1108 00:33:29.672087 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:33:33.672582 kubelet[3070]: E1108 00:33:33.672507 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:33:33.673080 kubelet[3070]: E1108 00:33:33.672923 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:33:37.672168 kubelet[3070]: E1108 00:33:37.672133 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:33:37.672561 kubelet[3070]: E1108 00:33:37.672374 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:33:38.673455 kubelet[3070]: E1108 00:33:38.673350 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:33:42.672453 kubelet[3070]: E1108 00:33:42.672380 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:33:45.672205 kubelet[3070]: E1108 00:33:45.672180 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:33:46.672042 kubelet[3070]: E1108 00:33:46.671990 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:33:48.672076 kubelet[3070]: E1108 00:33:48.672052 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:33:48.672347 kubelet[3070]: E1108 00:33:48.672257 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:33:50.673897 kubelet[3070]: E1108 00:33:50.673790 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:33:53.671890 kubelet[3070]: E1108 00:33:53.671864 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:33:59.674306 kubelet[3070]: E1108 00:33:59.674193 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:34:00.674930 kubelet[3070]: E1108 00:34:00.674832 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:34:00.674930 kubelet[3070]: E1108 00:34:00.674876 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:34:02.673295 kubelet[3070]: E1108 00:34:02.673164 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:34:04.672551 kubelet[3070]: E1108 00:34:04.672527 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:34:05.672877 kubelet[3070]: E1108 00:34:05.672841 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:34:11.676350 kubelet[3070]: E1108 00:34:11.676221 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:34:14.671978 kubelet[3070]: E1108 00:34:14.671953 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:34:15.674548 kubelet[3070]: E1108 00:34:15.674431 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:34:17.671888 kubelet[3070]: E1108 00:34:17.671864 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:34:19.672103 kubelet[3070]: E1108 00:34:19.672074 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:34:19.672103 kubelet[3070]: E1108 00:34:19.672078 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:34:23.674788 kubelet[3070]: E1108 00:34:23.674695 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:34:27.675091 kubelet[3070]: E1108 00:34:27.674854 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:34:28.672595 kubelet[3070]: E1108 00:34:28.672570 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:34:30.673185 kubelet[3070]: E1108 00:34:30.673054 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:34:32.673051 kubelet[3070]: E1108 00:34:32.672949 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:34:32.674126 kubelet[3070]: E1108 00:34:32.673303 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:34:37.674780 kubelet[3070]: E1108 00:34:37.674730 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:34:38.673283 kubelet[3070]: E1108 00:34:38.673224 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:34:43.673819 kubelet[3070]: E1108 00:34:43.673716 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3"
Nov 8 00:34:43.673819 kubelet[3070]: E1108 00:34:43.673759 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:34:44.671851 kubelet[3070]: E1108 00:34:44.671798 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:34:47.674102 kubelet[3070]: E1108 00:34:47.674011 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:34:50.671825 kubelet[3070]: E1108 00:34:50.671797 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:34:52.672343 kubelet[3070]: E1108 00:34:52.672320 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:34:56.673633 kubelet[3070]: E1108 00:34:56.673495 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:34:57.671981 kubelet[3070]: E1108 00:34:57.671960 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:34:58.672740 kubelet[3070]: E1108 00:34:58.672672 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:35:00.671911 kubelet[3070]: E1108 00:35:00.671879 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:35:04.674384 kubelet[3070]: E1108 00:35:04.674292 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:35:06.672659 kubelet[3070]: E1108 00:35:06.672589 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:35:08.672041 kubelet[3070]: E1108 00:35:08.672013 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:35:12.674164 containerd[1809]: time="2025-11-08T00:35:12.674058232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:13.060326 
containerd[1809]: time="2025-11-08T00:35:13.060193784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:13.060962 containerd[1809]: time="2025-11-08T00:35:13.060921100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:13.060996 containerd[1809]: time="2025-11-08T00:35:13.060963459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:13.061070 kubelet[3070]: E1108 00:35:13.061050 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:13.061358 kubelet[3070]: E1108 00:35:13.061079 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:13.061358 kubelet[3070]: E1108 00:35:13.061225 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-rk6lq_calico-apiserver(a4457e65-0840-44a3-9b91-05cc2050df9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:13.061358 kubelet[3070]: E1108 00:35:13.061250 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:35:13.061453 containerd[1809]: time="2025-11-08T00:35:13.061355388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:35:13.435315 containerd[1809]: time="2025-11-08T00:35:13.435193610Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:13.435740 containerd[1809]: time="2025-11-08T00:35:13.435664032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:35:13.435824 containerd[1809]: time="2025-11-08T00:35:13.435729310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:13.435965 kubelet[3070]: E1108 00:35:13.435882 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:13.435965 kubelet[3070]: E1108 00:35:13.435943 3070 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:35:13.436052 kubelet[3070]: E1108 00:35:13.435986 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6694c6b5c5-xb2cq_calico-apiserver(5ec5b66b-733a-489d-9c96-c95ce9255384): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:13.436052 kubelet[3070]: E1108 00:35:13.436009 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:35:13.672618 kubelet[3070]: E1108 00:35:13.672594 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:35:17.672692 kubelet[3070]: E1108 00:35:17.672639 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:35:20.674745 containerd[1809]: time="2025-11-08T00:35:20.674616820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:35:21.068384 containerd[1809]: time="2025-11-08T00:35:21.068286729Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:21.069348 containerd[1809]: time="2025-11-08T00:35:21.069307449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:35:21.069400 containerd[1809]: time="2025-11-08T00:35:21.069376716Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:35:21.069498 kubelet[3070]: E1108 00:35:21.069475 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:35:21.069684 kubelet[3070]: E1108 00:35:21.069505 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:35:21.069684 kubelet[3070]: E1108 00:35:21.069566 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:21.069994 containerd[1809]: time="2025-11-08T00:35:21.069983575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:35:21.444803 containerd[1809]: time="2025-11-08T00:35:21.444748469Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:21.445355 containerd[1809]: time="2025-11-08T00:35:21.445280861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:35:21.445395 containerd[1809]: time="2025-11-08T00:35:21.445349730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:35:21.445526 kubelet[3070]: E1108 00:35:21.445468 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:35:21.445526 kubelet[3070]: E1108 00:35:21.445498 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:35:21.445618 kubelet[3070]: E1108 00:35:21.445545 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-8665b9889f-q5txb_calico-system(225a8bd8-1a26-4c77-ba47-4755836593e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:21.445618 kubelet[3070]: E1108 00:35:21.445571 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:35:21.672711 containerd[1809]: time="2025-11-08T00:35:21.672688570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:35:22.048196 containerd[1809]: time="2025-11-08T00:35:22.048162245Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:22.048716 containerd[1809]: time="2025-11-08T00:35:22.048653620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:35:22.048754 containerd[1809]: time="2025-11-08T00:35:22.048720303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:35:22.048826 kubelet[3070]: E1108 00:35:22.048804 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:35:22.048874 kubelet[3070]: E1108 00:35:22.048833 3070 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:35:22.048895 kubelet[3070]: E1108 00:35:22.048881 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t42z5_calico-system(d510fe8b-db97-40db-ab28-3634909f38a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:22.048919 kubelet[3070]: E1108 00:35:22.048900 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:35:23.673714 kubelet[3070]: E1108 00:35:23.673622 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 
00:35:25.671947 kubelet[3070]: E1108 00:35:25.671917 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:35:28.674114 containerd[1809]: time="2025-11-08T00:35:28.674028553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:35:29.032520 containerd[1809]: time="2025-11-08T00:35:29.032423719Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:29.033202 containerd[1809]: time="2025-11-08T00:35:29.033173827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:35:29.033263 containerd[1809]: time="2025-11-08T00:35:29.033227064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:35:29.033338 kubelet[3070]: E1108 00:35:29.033314 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:35:29.033548 kubelet[3070]: E1108 00:35:29.033344 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:35:29.033548 kubelet[3070]: E1108 00:35:29.033509 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7c8d496dff-jlg6z_calico-system(7c46dfff-678e-44bc-9089-cef43e8fa0d3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:29.033548 kubelet[3070]: E1108 00:35:29.033533 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:35:29.033640 containerd[1809]: time="2025-11-08T00:35:29.033559112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:35:29.411986 containerd[1809]: time="2025-11-08T00:35:29.411877923Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:29.412663 containerd[1809]: 
time="2025-11-08T00:35:29.412575688Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:35:29.412726 containerd[1809]: time="2025-11-08T00:35:29.412665286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:35:29.412795 kubelet[3070]: E1108 00:35:29.412773 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:29.412844 kubelet[3070]: E1108 00:35:29.412803 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:35:29.412878 kubelet[3070]: E1108 00:35:29.412859 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:29.413372 containerd[1809]: time="2025-11-08T00:35:29.413329660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:35:29.779892 containerd[1809]: 
time="2025-11-08T00:35:29.779863168Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:35:29.780314 containerd[1809]: time="2025-11-08T00:35:29.780276146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:35:29.780369 containerd[1809]: time="2025-11-08T00:35:29.780326028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:35:29.780509 kubelet[3070]: E1108 00:35:29.780434 3070 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:29.780509 kubelet[3070]: E1108 00:35:29.780500 3070 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:35:29.780574 kubelet[3070]: E1108 00:35:29.780559 3070 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-njlbj_calico-system(2db14322-3de3-476c-bc43-59b2bd1acea4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:35:29.780632 kubelet[3070]: E1108 00:35:29.780605 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:35:34.675032 kubelet[3070]: E1108 00:35:34.674923 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:35:36.674706 kubelet[3070]: E1108 00:35:36.674616 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:35:38.673425 kubelet[3070]: E1108 00:35:38.673361 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:35:39.673939 kubelet[3070]: E1108 00:35:39.673835 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:35:42.674728 kubelet[3070]: E1108 00:35:42.674627 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:35:44.673849 kubelet[3070]: E1108 00:35:44.673755 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:35:49.672541 kubelet[3070]: E1108 00:35:49.672507 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:35:49.673250 kubelet[3070]: E1108 00:35:49.673221 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:35:50.672258 kubelet[3070]: E1108 00:35:50.672212 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:35:50.950622 systemd[1]: Started sshd@9-139.178.94.39:22-139.178.68.195:35788.service - OpenSSH per-connection server daemon (139.178.68.195:35788). Nov 8 00:35:51.023515 sshd[7982]: Accepted publickey for core from 139.178.68.195 port 35788 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:35:51.024477 sshd[7982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:51.027578 systemd-logind[1799]: New session 12 of user core. Nov 8 00:35:51.046311 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:35:51.143545 sshd[7982]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:51.145199 systemd[1]: sshd@9-139.178.94.39:22-139.178.68.195:35788.service: Deactivated successfully. Nov 8 00:35:51.146247 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:35:51.147000 systemd-logind[1799]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:35:51.147670 systemd-logind[1799]: Removed session 12. Nov 8 00:35:51.674806 kubelet[3070]: E1108 00:35:51.674662 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:35:56.164055 systemd[1]: Started sshd@10-139.178.94.39:22-139.178.68.195:54650.service - OpenSSH per-connection server daemon (139.178.68.195:54650). 
Nov 8 00:35:56.199091 sshd[8016]: Accepted publickey for core from 139.178.68.195 port 54650 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:35:56.199849 sshd[8016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:35:56.202394 systemd-logind[1799]: New session 13 of user core. Nov 8 00:35:56.216349 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:35:56.336466 sshd[8016]: pam_unix(sshd:session): session closed for user core Nov 8 00:35:56.338650 systemd[1]: sshd@10-139.178.94.39:22-139.178.68.195:54650.service: Deactivated successfully. Nov 8 00:35:56.339866 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:35:56.340369 systemd-logind[1799]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:35:56.340910 systemd-logind[1799]: Removed session 13. Nov 8 00:35:56.672101 kubelet[3070]: E1108 00:35:56.672054 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:35:57.673308 kubelet[3070]: E1108 00:35:57.673264 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:36:00.674199 kubelet[3070]: E1108 00:36:00.674053 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:36:01.365366 systemd[1]: Started sshd@11-139.178.94.39:22-139.178.68.195:54652.service - OpenSSH per-connection server daemon (139.178.68.195:54652). Nov 8 00:36:01.394730 sshd[8044]: Accepted publickey for core from 139.178.68.195 port 54652 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:36:01.395583 sshd[8044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:01.398110 systemd-logind[1799]: New session 14 of user core. Nov 8 00:36:01.407297 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 8 00:36:01.493820 sshd[8044]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:01.508050 systemd[1]: sshd@11-139.178.94.39:22-139.178.68.195:54652.service: Deactivated successfully. Nov 8 00:36:01.508956 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:36:01.509655 systemd-logind[1799]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:36:01.510309 systemd[1]: Started sshd@12-139.178.94.39:22-139.178.68.195:54662.service - OpenSSH per-connection server daemon (139.178.68.195:54662). Nov 8 00:36:01.510835 systemd-logind[1799]: Removed session 14. Nov 8 00:36:01.542067 sshd[8071]: Accepted publickey for core from 139.178.68.195 port 54662 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:36:01.542848 sshd[8071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:01.545528 systemd-logind[1799]: New session 15 of user core. Nov 8 00:36:01.559398 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:36:01.657641 sshd[8071]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:01.667105 systemd[1]: sshd@12-139.178.94.39:22-139.178.68.195:54662.service: Deactivated successfully. Nov 8 00:36:01.668249 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:36:01.668973 systemd-logind[1799]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:36:01.669731 systemd[1]: Started sshd@13-139.178.94.39:22-139.178.68.195:54672.service - OpenSSH per-connection server daemon (139.178.68.195:54672). Nov 8 00:36:01.670255 systemd-logind[1799]: Removed session 15. 
Nov 8 00:36:01.672289 kubelet[3070]: E1108 00:36:01.672257 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:36:01.701392 sshd[8095]: Accepted publickey for core from 139.178.68.195 port 54672 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:36:01.702195 sshd[8095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:01.704874 systemd-logind[1799]: New session 16 of user core. Nov 8 00:36:01.726364 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:36:01.804013 sshd[8095]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:01.806043 systemd[1]: sshd@13-139.178.94.39:22-139.178.68.195:54672.service: Deactivated successfully. Nov 8 00:36:01.806949 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:36:01.807368 systemd-logind[1799]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:36:01.807838 systemd-logind[1799]: Removed session 16. 
Nov 8 00:36:02.673927 kubelet[3070]: E1108 00:36:02.673800 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:36:04.671885 kubelet[3070]: E1108 00:36:04.671828 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:36:06.839949 systemd[1]: Started sshd@14-139.178.94.39:22-139.178.68.195:57280.service - OpenSSH per-connection server daemon (139.178.68.195:57280). Nov 8 00:36:06.902443 sshd[8143]: Accepted publickey for core from 139.178.68.195 port 57280 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:36:06.903382 sshd[8143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:06.906415 systemd-logind[1799]: New session 17 of user core. Nov 8 00:36:06.922398 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 8 00:36:07.050983 sshd[8143]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:07.052554 systemd[1]: sshd@14-139.178.94.39:22-139.178.68.195:57280.service: Deactivated successfully. Nov 8 00:36:07.053482 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:36:07.054168 systemd-logind[1799]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:36:07.054894 systemd-logind[1799]: Removed session 17. Nov 8 00:36:08.673752 kubelet[3070]: E1108 00:36:08.673663 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:36:11.676415 kubelet[3070]: E1108 00:36:11.676284 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4" Nov 8 00:36:12.066813 systemd[1]: Started sshd@15-139.178.94.39:22-139.178.68.195:57286.service - OpenSSH per-connection server daemon (139.178.68.195:57286). Nov 8 00:36:12.145503 sshd[8181]: Accepted publickey for core from 139.178.68.195 port 57286 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:36:12.147523 sshd[8181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:12.151834 systemd-logind[1799]: New session 18 of user core. Nov 8 00:36:12.164344 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:36:12.293325 sshd[8181]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:12.295320 systemd[1]: sshd@15-139.178.94.39:22-139.178.68.195:57286.service: Deactivated successfully. Nov 8 00:36:12.296308 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:36:12.296685 systemd-logind[1799]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:36:12.297148 systemd-logind[1799]: Removed session 18. 
Nov 8 00:36:12.675242 kubelet[3070]: E1108 00:36:12.675197 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3" Nov 8 00:36:14.674219 kubelet[3070]: E1108 00:36:14.674119 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6" Nov 8 00:36:14.674219 kubelet[3070]: E1108 00:36:14.674170 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f" Nov 8 00:36:15.673815 kubelet[3070]: E1108 00:36:15.673723 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384" Nov 8 00:36:17.340951 systemd[1]: Started sshd@16-139.178.94.39:22-139.178.68.195:59908.service - OpenSSH per-connection server daemon (139.178.68.195:59908). Nov 8 00:36:17.400901 sshd[8207]: Accepted publickey for core from 139.178.68.195 port 59908 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:36:17.401747 sshd[8207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:17.404272 systemd-logind[1799]: New session 19 of user core. Nov 8 00:36:17.419415 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:36:17.498088 sshd[8207]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:17.499915 systemd[1]: sshd@16-139.178.94.39:22-139.178.68.195:59908.service: Deactivated successfully. Nov 8 00:36:17.500840 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:36:17.501172 systemd-logind[1799]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:36:17.501715 systemd-logind[1799]: Removed session 19. 
Nov 8 00:36:22.512872 systemd[1]: Started sshd@17-139.178.94.39:22-139.178.68.195:59914.service - OpenSSH per-connection server daemon (139.178.68.195:59914). Nov 8 00:36:22.544893 sshd[8265]: Accepted publickey for core from 139.178.68.195 port 59914 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ Nov 8 00:36:22.545649 sshd[8265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:36:22.548096 systemd-logind[1799]: New session 20 of user core. Nov 8 00:36:22.563338 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:36:22.656171 sshd[8265]: pam_unix(sshd:session): session closed for user core Nov 8 00:36:22.672452 kubelet[3070]: E1108 00:36:22.672425 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c8d496dff-jlg6z" podUID="7c46dfff-678e-44bc-9089-cef43e8fa0d3" Nov 8 00:36:22.673174 systemd[1]: sshd@17-139.178.94.39:22-139.178.68.195:59914.service: Deactivated successfully. Nov 8 00:36:22.674228 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:36:22.675057 systemd-logind[1799]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:36:22.675880 systemd[1]: Started sshd@18-139.178.94.39:22-139.178.68.195:59924.service - OpenSSH per-connection server daemon (139.178.68.195:59924). Nov 8 00:36:22.676558 systemd-logind[1799]: Removed session 20. 
Nov 8 00:36:22.710361 sshd[8291]: Accepted publickey for core from 139.178.68.195 port 59924 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 00:36:22.711196 sshd[8291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:36:22.713679 systemd-logind[1799]: New session 21 of user core.
Nov 8 00:36:22.735418 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:36:22.921053 sshd[8291]: pam_unix(sshd:session): session closed for user core
Nov 8 00:36:22.938740 systemd[1]: sshd@18-139.178.94.39:22-139.178.68.195:59924.service: Deactivated successfully.
Nov 8 00:36:22.943002 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:36:22.946565 systemd-logind[1799]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:36:22.956890 systemd[1]: Started sshd@19-139.178.94.39:22-139.178.68.195:59932.service - OpenSSH per-connection server daemon (139.178.68.195:59932).
Nov 8 00:36:22.959427 systemd-logind[1799]: Removed session 21.
Nov 8 00:36:23.021643 sshd[8316]: Accepted publickey for core from 139.178.68.195 port 59932 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 00:36:23.022550 sshd[8316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:36:23.025667 systemd-logind[1799]: New session 22 of user core.
Nov 8 00:36:23.035411 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:36:23.642952 sshd[8316]: pam_unix(sshd:session): session closed for user core
Nov 8 00:36:23.654851 systemd[1]: sshd@19-139.178.94.39:22-139.178.68.195:59932.service: Deactivated successfully.
Nov 8 00:36:23.655710 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:36:23.656484 systemd-logind[1799]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:36:23.657153 systemd[1]: Started sshd@20-139.178.94.39:22-139.178.68.195:41386.service - OpenSSH per-connection server daemon (139.178.68.195:41386).
Nov 8 00:36:23.657597 systemd-logind[1799]: Removed session 22.
Nov 8 00:36:23.692116 sshd[8347]: Accepted publickey for core from 139.178.68.195 port 41386 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 00:36:23.695681 sshd[8347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:36:23.706392 systemd-logind[1799]: New session 23 of user core.
Nov 8 00:36:23.726534 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:36:23.879583 sshd[8347]: pam_unix(sshd:session): session closed for user core
Nov 8 00:36:23.890825 systemd[1]: sshd@20-139.178.94.39:22-139.178.68.195:41386.service: Deactivated successfully.
Nov 8 00:36:23.891671 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:36:23.892332 systemd-logind[1799]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:36:23.892988 systemd[1]: Started sshd@21-139.178.94.39:22-139.178.68.195:41390.service - OpenSSH per-connection server daemon (139.178.68.195:41390).
Nov 8 00:36:23.893489 systemd-logind[1799]: Removed session 23.
Nov 8 00:36:23.925620 sshd[8371]: Accepted publickey for core from 139.178.68.195 port 41390 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 00:36:23.929062 sshd[8371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:36:23.939994 systemd-logind[1799]: New session 24 of user core.
Nov 8 00:36:23.964404 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:36:24.045382 sshd[8371]: pam_unix(sshd:session): session closed for user core
Nov 8 00:36:24.047329 systemd[1]: sshd@21-139.178.94.39:22-139.178.68.195:41390.service: Deactivated successfully.
Nov 8 00:36:24.048194 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:36:24.048601 systemd-logind[1799]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:36:24.049111 systemd-logind[1799]: Removed session 24.
Nov 8 00:36:26.676006 kubelet[3070]: E1108 00:36:26.675898 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-njlbj" podUID="2db14322-3de3-476c-bc43-59b2bd1acea4"
Nov 8 00:36:27.672171 kubelet[3070]: E1108 00:36:27.672124 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t42z5" podUID="d510fe8b-db97-40db-ab28-3634909f38a6"
Nov 8 00:36:27.672625 kubelet[3070]: E1108 00:36:27.672597 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8665b9889f-q5txb" podUID="225a8bd8-1a26-4c77-ba47-4755836593e3"
Nov 8 00:36:28.672193 kubelet[3070]: E1108 00:36:28.672167 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-rk6lq" podUID="a4457e65-0840-44a3-9b91-05cc2050df9f"
Nov 8 00:36:29.061566 systemd[1]: Started sshd@22-139.178.94.39:22-139.178.68.195:41394.service - OpenSSH per-connection server daemon (139.178.68.195:41394).
Nov 8 00:36:29.120827 sshd[8404]: Accepted publickey for core from 139.178.68.195 port 41394 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 00:36:29.122249 sshd[8404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:36:29.126599 systemd-logind[1799]: New session 25 of user core.
Nov 8 00:36:29.137384 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 8 00:36:29.248333 sshd[8404]: pam_unix(sshd:session): session closed for user core
Nov 8 00:36:29.249979 systemd[1]: sshd@22-139.178.94.39:22-139.178.68.195:41394.service: Deactivated successfully.
Nov 8 00:36:29.250876 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 00:36:29.251601 systemd-logind[1799]: Session 25 logged out. Waiting for processes to exit.
Nov 8 00:36:29.252076 systemd-logind[1799]: Removed session 25.
Nov 8 00:36:29.673248 kubelet[3070]: E1108 00:36:29.673200 3070 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6694c6b5c5-xb2cq" podUID="5ec5b66b-733a-489d-9c96-c95ce9255384"
Nov 8 00:36:34.285876 systemd[1]: Started sshd@23-139.178.94.39:22-139.178.68.195:60550.service - OpenSSH per-connection server daemon (139.178.68.195:60550).
Nov 8 00:36:34.350860 sshd[8431]: Accepted publickey for core from 139.178.68.195 port 60550 ssh2: RSA SHA256:CDEH3Gh6VSwb5luG5uhujouIqwp740QGMGXihV+mnVQ
Nov 8 00:36:34.351839 sshd[8431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:36:34.354984 systemd-logind[1799]: New session 26 of user core.
Nov 8 00:36:34.364307 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:36:34.482273 sshd[8431]: pam_unix(sshd:session): session closed for user core
Nov 8 00:36:34.483923 systemd[1]: sshd@23-139.178.94.39:22-139.178.68.195:60550.service: Deactivated successfully.
Nov 8 00:36:34.484921 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:36:34.485718 systemd-logind[1799]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:36:34.486330 systemd-logind[1799]: Removed session 26.