Nov 1 01:00:29.033693 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 01:00:29.033708 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:00:29.033715 kernel: BIOS-provided physical RAM map:
Nov 1 01:00:29.033719 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Nov 1 01:00:29.033723 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Nov 1 01:00:29.033727 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Nov 1 01:00:29.033732 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Nov 1 01:00:29.033736 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Nov 1 01:00:29.033740 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
Nov 1 01:00:29.033744 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
Nov 1 01:00:29.033748 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
Nov 1 01:00:29.033753 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
Nov 1 01:00:29.033757 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Nov 1 01:00:29.033761 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Nov 1 01:00:29.033767 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Nov 1 01:00:29.033771 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Nov 1 01:00:29.033777 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Nov 1 01:00:29.033782 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Nov 1 01:00:29.033786 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 01:00:29.033791 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Nov 1 01:00:29.033795 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Nov 1 01:00:29.033800 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 1 01:00:29.033804 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Nov 1 01:00:29.033809 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Nov 1 01:00:29.033814 kernel: NX (Execute Disable) protection: active
Nov 1 01:00:29.033818 kernel: APIC: Static calls initialized
Nov 1 01:00:29.033823 kernel: SMBIOS 3.2.1 present.
Nov 1 01:00:29.033828 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
Nov 1 01:00:29.033833 kernel: tsc: Detected 3400.000 MHz processor
Nov 1 01:00:29.033838 kernel: tsc: Detected 3399.906 MHz TSC
Nov 1 01:00:29.033843 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 01:00:29.033848 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 01:00:29.033853 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Nov 1 01:00:29.033857 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Nov 1 01:00:29.033862 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 01:00:29.033867 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Nov 1 01:00:29.033871 kernel: Using GB pages for direct mapping
Nov 1 01:00:29.033877 kernel: ACPI: Early table checksum verification disabled
Nov 1 01:00:29.033882 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Nov 1 01:00:29.033886 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Nov 1 01:00:29.033893 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Nov 1 01:00:29.033898 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Nov 1 01:00:29.033903 kernel: ACPI: FACS 0x000000008C66DF80 000040
Nov 1 01:00:29.033908 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Nov 1 01:00:29.033914 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Nov 1 01:00:29.033919 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Nov 1 01:00:29.033924 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Nov 1 01:00:29.033929 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Nov 1 01:00:29.033934 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Nov 1 01:00:29.033939 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Nov 1 01:00:29.033943 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Nov 1 01:00:29.033949 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033954 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Nov 1 01:00:29.033959 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Nov 1 01:00:29.033964 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033969 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033974 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Nov 1 01:00:29.033979 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Nov 1 01:00:29.033984 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033989 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033995 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Nov 1 01:00:29.034000 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Nov 1 01:00:29.034005 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Nov 1 01:00:29.034010 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Nov 1 01:00:29.034015 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Nov 1 01:00:29.034019 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Nov 1 01:00:29.034024 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Nov 1 01:00:29.034029 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Nov 1 01:00:29.034035 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Nov 1 01:00:29.034040 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Nov 1 01:00:29.034045 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Nov 1 01:00:29.034050 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Nov 1 01:00:29.034055 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Nov 1 01:00:29.034060 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Nov 1 01:00:29.034065 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Nov 1 01:00:29.034070 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Nov 1 01:00:29.034075 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Nov 1 01:00:29.034081 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Nov 1 01:00:29.034086 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Nov 1 01:00:29.034090 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Nov 1 01:00:29.034095 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Nov 1 01:00:29.034100 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Nov 1 01:00:29.034105 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Nov 1 01:00:29.034110 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Nov 1 01:00:29.034115 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Nov 1 01:00:29.034120 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Nov 1 01:00:29.034126 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Nov 1 01:00:29.034131 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Nov 1 01:00:29.034136 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Nov 1 01:00:29.034141 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Nov 1 01:00:29.034145 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Nov 1 01:00:29.034150 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Nov 1 01:00:29.034155 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Nov 1 01:00:29.034160 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Nov 1 01:00:29.034165 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Nov 1 01:00:29.034171 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Nov 1 01:00:29.034176 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Nov 1 01:00:29.034180 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Nov 1 01:00:29.034185 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Nov 1 01:00:29.034190 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Nov 1 01:00:29.034195 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Nov 1 01:00:29.034200 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Nov 1 01:00:29.034205 kernel: No NUMA configuration found
Nov 1 01:00:29.034210 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Nov 1 01:00:29.034215 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Nov 1 01:00:29.034224 kernel: Zone ranges:
Nov 1 01:00:29.034229 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 01:00:29.034234 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 01:00:29.034239 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Nov 1 01:00:29.034244 kernel: Movable zone start for each node
Nov 1 01:00:29.034249 kernel: Early memory node ranges
Nov 1 01:00:29.034253 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Nov 1 01:00:29.034258 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Nov 1 01:00:29.034263 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
Nov 1 01:00:29.034269 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
Nov 1 01:00:29.034274 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Nov 1 01:00:29.034279 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Nov 1 01:00:29.034284 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Nov 1 01:00:29.034294 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Nov 1 01:00:29.034299 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 01:00:29.034304 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Nov 1 01:00:29.034309 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 1 01:00:29.034316 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Nov 1 01:00:29.034321 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Nov 1 01:00:29.034326 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Nov 1 01:00:29.034331 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Nov 1 01:00:29.034337 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Nov 1 01:00:29.034342 kernel: ACPI: PM-Timer IO Port: 0x1808
Nov 1 01:00:29.034347 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 1 01:00:29.034353 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 1 01:00:29.034358 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 1 01:00:29.034364 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 1 01:00:29.034369 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 1 01:00:29.034375 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 1 01:00:29.034380 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 1 01:00:29.034385 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 1 01:00:29.034390 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 1 01:00:29.034396 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 1 01:00:29.034401 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 1 01:00:29.034406 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 1 01:00:29.034412 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 1 01:00:29.034417 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 1 01:00:29.034423 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 1 01:00:29.034428 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 1 01:00:29.034433 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Nov 1 01:00:29.034438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 01:00:29.034444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 01:00:29.034449 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 01:00:29.034454 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 01:00:29.034460 kernel: TSC deadline timer available
Nov 1 01:00:29.034466 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Nov 1 01:00:29.034471 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Nov 1 01:00:29.034477 kernel: Booting paravirtualized kernel on bare hardware
Nov 1 01:00:29.034482 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 01:00:29.034487 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 1 01:00:29.034493 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 1 01:00:29.034498 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 1 01:00:29.034503 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 01:00:29.034510 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:00:29.034515 kernel: random: crng init done
Nov 1 01:00:29.034521 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Nov 1 01:00:29.034526 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 1 01:00:29.034531 kernel: Fallback order for Node 0: 0
Nov 1 01:00:29.034537 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Nov 1 01:00:29.034542 kernel: Policy zone: Normal
Nov 1 01:00:29.034547 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 01:00:29.034552 kernel: software IO TLB: area num 16.
Nov 1 01:00:29.034559 kernel: Memory: 32720312K/33452984K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 732412K reserved, 0K cma-reserved)
Nov 1 01:00:29.034564 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 01:00:29.034570 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 01:00:29.034575 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 01:00:29.034580 kernel: Dynamic Preempt: voluntary
Nov 1 01:00:29.034586 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 01:00:29.034591 kernel: rcu: RCU event tracing is enabled.
Nov 1 01:00:29.034597 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 01:00:29.034602 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 01:00:29.034608 kernel: Rude variant of Tasks RCU enabled.
Nov 1 01:00:29.034613 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 01:00:29.034619 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 01:00:29.034624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 01:00:29.034629 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Nov 1 01:00:29.034635 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 01:00:29.034640 kernel: Console: colour dummy device 80x25
Nov 1 01:00:29.034645 kernel: printk: console [tty0] enabled
Nov 1 01:00:29.034650 kernel: printk: console [ttyS1] enabled
Nov 1 01:00:29.034657 kernel: ACPI: Core revision 20230628
Nov 1 01:00:29.034662 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Nov 1 01:00:29.034667 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 01:00:29.034673 kernel: DMAR: Host address width 39
Nov 1 01:00:29.034678 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Nov 1 01:00:29.034683 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Nov 1 01:00:29.034689 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Nov 1 01:00:29.034694 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Nov 1 01:00:29.034699 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Nov 1 01:00:29.034705 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Nov 1 01:00:29.034711 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Nov 1 01:00:29.034716 kernel: x2apic enabled
Nov 1 01:00:29.034721 kernel: APIC: Switched APIC routing to: cluster x2apic
Nov 1 01:00:29.034727 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Nov 1 01:00:29.034732 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Nov 1 01:00:29.034738 kernel: CPU0: Thermal monitoring enabled (TM1)
Nov 1 01:00:29.034743 kernel: process: using mwait in idle threads
Nov 1 01:00:29.034748 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 01:00:29.034754 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 01:00:29.034759 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 01:00:29.034765 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 01:00:29.034770 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 01:00:29.034776 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 1 01:00:29.034781 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 1 01:00:29.034786 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 1 01:00:29.034791 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 01:00:29.034796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 01:00:29.034802 kernel: TAA: Mitigation: TSX disabled
Nov 1 01:00:29.034807 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Nov 1 01:00:29.034812 kernel: SRBDS: Mitigation: Microcode
Nov 1 01:00:29.034818 kernel: GDS: Mitigation: Microcode
Nov 1 01:00:29.034824 kernel: active return thunk: its_return_thunk
Nov 1 01:00:29.034829 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 01:00:29.034834 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Nov 1 01:00:29.034839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 01:00:29.034845 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 01:00:29.034850 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 01:00:29.034855 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 1 01:00:29.034860 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 1 01:00:29.034865 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 01:00:29.034871 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 1 01:00:29.034877 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 1 01:00:29.034882 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Nov 1 01:00:29.034887 kernel: Freeing SMP alternatives memory: 32K
Nov 1 01:00:29.034893 kernel: pid_max: default: 32768 minimum: 301
Nov 1 01:00:29.034898 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 01:00:29.034903 kernel: landlock: Up and running.
Nov 1 01:00:29.034908 kernel: SELinux: Initializing.
Nov 1 01:00:29.034914 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 01:00:29.034919 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 01:00:29.034924 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 1 01:00:29.034930 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:00:29.034936 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:00:29.034942 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:00:29.034947 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Nov 1 01:00:29.034952 kernel: ... version: 4
Nov 1 01:00:29.034958 kernel: ... bit width: 48
Nov 1 01:00:29.034963 kernel: ... generic registers: 4
Nov 1 01:00:29.034968 kernel: ... value mask: 0000ffffffffffff
Nov 1 01:00:29.034974 kernel: ... max period: 00007fffffffffff
Nov 1 01:00:29.034979 kernel: ... fixed-purpose events: 3
Nov 1 01:00:29.034985 kernel: ... event mask: 000000070000000f
Nov 1 01:00:29.034990 kernel: signal: max sigframe size: 2032
Nov 1 01:00:29.034996 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Nov 1 01:00:29.035001 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 01:00:29.035006 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 01:00:29.035012 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Nov 1 01:00:29.035017 kernel: smp: Bringing up secondary CPUs ...
Nov 1 01:00:29.035022 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 01:00:29.035027 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Nov 1 01:00:29.035034 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 01:00:29.035039 kernel: smp: Brought up 1 node, 16 CPUs
Nov 1 01:00:29.035045 kernel: smpboot: Max logical packages: 1
Nov 1 01:00:29.035050 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Nov 1 01:00:29.035055 kernel: devtmpfs: initialized
Nov 1 01:00:29.035060 kernel: x86/mm: Memory block size: 128MB
Nov 1 01:00:29.035066 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes)
Nov 1 01:00:29.035071 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Nov 1 01:00:29.035076 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 01:00:29.035083 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 01:00:29.035088 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 01:00:29.035093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 01:00:29.035099 kernel: audit: initializing netlink subsys (disabled)
Nov 1 01:00:29.035104 kernel: audit: type=2000 audit(1761958823.039:1): state=initialized audit_enabled=0 res=1
Nov 1 01:00:29.035109 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 01:00:29.035114 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 01:00:29.035120 kernel: cpuidle: using governor menu
Nov 1 01:00:29.035126 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 01:00:29.035131 kernel: dca service started, version 1.12.1
Nov 1 01:00:29.035136 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Nov 1 01:00:29.035142 kernel: PCI: Using configuration type 1 for base access
Nov 1 01:00:29.035147 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Nov 1 01:00:29.035152 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 01:00:29.035158 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 01:00:29.035163 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 01:00:29.035168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 01:00:29.035174 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 01:00:29.035180 kernel: ACPI: Added _OSI(Module Device)
Nov 1 01:00:29.035185 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 01:00:29.035190 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 01:00:29.035196 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Nov 1 01:00:29.035201 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:00:29.035206 kernel: ACPI: SSDT 0xFFFF93C601B31C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Nov 1 01:00:29.035212 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:00:29.035217 kernel: ACPI: SSDT 0xFFFF93C601B28000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Nov 1 01:00:29.035225 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:00:29.035231 kernel: ACPI: SSDT 0xFFFF93C600246700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Nov 1 01:00:29.035236 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:00:29.035241 kernel: ACPI: SSDT 0xFFFF93C601E5D800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Nov 1 01:00:29.035246 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:00:29.035252 kernel: ACPI: SSDT 0xFFFF93C60012A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Nov 1 01:00:29.035257 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 01:00:29.035262 kernel: ACPI: SSDT 0xFFFF93C601B30800 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Nov 1 01:00:29.035267 kernel: ACPI: _OSC evaluated successfully for all CPUs
Nov 1 01:00:29.035273 kernel: ACPI: Interpreter enabled
Nov 1 01:00:29.035279 kernel: ACPI: PM: (supports S0 S5)
Nov 1 01:00:29.035284 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 01:00:29.035290 kernel: HEST: Enabling Firmware First mode for corrected errors.
Nov 1 01:00:29.035295 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Nov 1 01:00:29.035300 kernel: HEST: Table parsing has been initialized.
Nov 1 01:00:29.035305 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Nov 1 01:00:29.035311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 01:00:29.035316 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 1 01:00:29.035321 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Nov 1 01:00:29.035328 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource
Nov 1 01:00:29.035333 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource
Nov 1 01:00:29.035338 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource
Nov 1 01:00:29.035344 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource
Nov 1 01:00:29.035349 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource
Nov 1 01:00:29.035354 kernel: ACPI: \_TZ_.FN00: New power resource
Nov 1 01:00:29.035360 kernel: ACPI: \_TZ_.FN01: New power resource
Nov 1 01:00:29.035365 kernel: ACPI: \_TZ_.FN02: New power resource
Nov 1 01:00:29.035370 kernel: ACPI: \_TZ_.FN03: New power resource
Nov 1 01:00:29.035377 kernel: ACPI: \_TZ_.FN04: New power resource
Nov 1 01:00:29.035382 kernel: ACPI: \PIN_: New power resource
Nov 1 01:00:29.035387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Nov 1 01:00:29.035463 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 01:00:29.035517 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Nov 1 01:00:29.035567 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Nov 1 01:00:29.035575 kernel: PCI host bridge to bus 0000:00
Nov 1 01:00:29.035628 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 01:00:29.035673 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 01:00:29.035716 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 01:00:29.035759 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Nov 1 01:00:29.035800 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Nov 1 01:00:29.035843 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Nov 1 01:00:29.035900 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Nov 1 01:00:29.035961 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Nov 1 01:00:29.036011 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Nov 1 01:00:29.036065 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Nov 1 01:00:29.036114 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Nov 1 01:00:29.036168 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Nov 1 01:00:29.036217 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Nov 1 01:00:29.036277 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Nov 1 01:00:29.036327 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Nov 1 01:00:29.036375 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Nov 1 01:00:29.036427 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Nov 1 01:00:29.036475 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Nov 1 01:00:29.036524 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Nov 1 01:00:29.036578 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Nov 1 01:00:29.036627 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 01:00:29.036682 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Nov 1 01:00:29.036732 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 01:00:29.036784 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Nov 1 01:00:29.036834 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Nov 1 01:00:29.036884 kernel: pci 0000:00:16.0: PME# supported from D3hot
Nov 1 01:00:29.036936 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Nov 1 01:00:29.036993 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Nov 1 01:00:29.037044 kernel: pci 0000:00:16.1: PME# supported from D3hot
Nov 1 01:00:29.037096 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Nov 1 01:00:29.037145 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Nov 1 01:00:29.037194 kernel: pci 0000:00:16.4: PME# supported from D3hot
Nov 1 01:00:29.037277 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Nov 1 01:00:29.037342 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Nov 1 01:00:29.037390 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Nov 1 01:00:29.037438 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Nov 1 01:00:29.037486 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Nov 1 01:00:29.037534 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Nov 1 01:00:29.037585 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Nov 1 01:00:29.037633 kernel: pci 0000:00:17.0: PME# supported from D3hot
Nov 1 01:00:29.037685 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Nov 1 01:00:29.037735 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Nov 1 01:00:29.037793 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Nov 1 01:00:29.037846 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Nov 1 01:00:29.037900 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Nov 1 01:00:29.037949 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Nov 1 01:00:29.038003 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Nov 1 01:00:29.038052 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Nov 1 01:00:29.038106 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Nov 1 01:00:29.038157 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Nov 1 01:00:29.038210 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Nov 1 01:00:29.038302 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 01:00:29.038355 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Nov 1 01:00:29.038408 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Nov 1 01:00:29.038458 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Nov 1 01:00:29.038509 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Nov 1 01:00:29.038564 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Nov 1 01:00:29.038613 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Nov 1 01:00:29.038669 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Nov 1 01:00:29.038720 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Nov 1 01:00:29.038770 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Nov 1 01:00:29.038820 kernel: pci 0000:01:00.0: PME# supported from D3cold
Nov 1 01:00:29.038872 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 01:00:29.038923 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 01:00:29.038977 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Nov 1 01:00:29.039028 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Nov 1 01:00:29.039078 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Nov 1 01:00:29.039128 kernel: pci 0000:01:00.1: PME# supported from D3cold
Nov 1 01:00:29.039179 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 01:00:29.039257 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 01:00:29.039322 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 1 01:00:29.039371 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Nov 1 01:00:29.039421 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 01:00:29.039471 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Nov 1 01:00:29.039527 kernel: pci
0000:03:00.0: working around ROM BAR overlap defect Nov 1 01:00:29.039577 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:00:29.039631 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 01:00:29.039682 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 01:00:29.039731 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 01:00:29.039782 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.039831 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:00:29.039881 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:00:29.039929 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:00:29.039987 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 01:00:29.040037 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:00:29.040088 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 01:00:29.040138 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 01:00:29.040188 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 01:00:29.040268 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.040331 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:00:29.040382 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:00:29.040433 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:00:29.040483 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:00:29.040539 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 01:00:29.040589 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 01:00:29.040685 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 01:00:29.040738 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:00:29.040789 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:00:29.040842 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 
01:00:29.040892 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.040947 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:00:29.041005 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:00:29.041058 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:00:29.041112 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:00:29.041167 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:00:29.041239 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:00:29.041310 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 01:00:29.041363 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:00:29.041412 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:00:29.041465 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:00:29.041516 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.041524 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:00:29.041530 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:00:29.041537 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:00:29.041543 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:00:29.041549 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:00:29.041554 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:00:29.041560 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:00:29.041566 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:00:29.041571 kernel: iommu: Default domain type: Translated Nov 1 01:00:29.041577 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:00:29.041583 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:00:29.041589 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:00:29.041595 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Nov 1 01:00:29.041600 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] Nov 1 01:00:29.041606 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Nov 1 01:00:29.041611 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Nov 1 01:00:29.041617 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:00:29.041622 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:00:29.041673 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:00:29.041725 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:00:29.041781 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:00:29.041789 kernel: vgaarb: loaded Nov 1 01:00:29.041795 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:00:29.041801 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:00:29.041806 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:00:29.041812 kernel: pnp: PnP ACPI init Nov 1 01:00:29.041862 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:00:29.041912 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:00:29.041965 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:00:29.042016 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:00:29.042062 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:00:29.042110 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 01:00:29.042155 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:00:29.042201 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:00:29.042281 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:00:29.042327 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:00:29.042374 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:00:29.042420 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:00:29.042464 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 01:00:29.042513 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 1 01:00:29.042559 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:00:29.042606 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:00:29.042651 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:00:29.042694 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:00:29.042739 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:00:29.042783 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:00:29.042832 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 1 01:00:29.042840 kernel: pnp: PnP ACPI: found 9 devices Nov 1 01:00:29.042848 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:00:29.042854 kernel: NET: Registered PF_INET protocol family Nov 1 01:00:29.042859 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:00:29.042865 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:00:29.042871 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:00:29.042877 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:00:29.042883 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 01:00:29.042888 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:00:29.042894 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:00:29.042900 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:00:29.042906 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 
01:00:29.042912 kernel: NET: Registered PF_XDP protocol family Nov 1 01:00:29.042961 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:00:29.043011 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:00:29.043060 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:00:29.043111 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043161 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043215 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043305 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043354 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:00:29.043403 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:00:29.043452 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:00:29.043501 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:00:29.043552 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:00:29.043602 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:00:29.043650 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:00:29.043699 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:00:29.043746 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:00:29.043795 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:00:29.043843 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:00:29.043895 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:00:29.043945 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:00:29.043995 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044045 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:00:29.044095 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:00:29.044144 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044188 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:00:29.044256 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:00:29.044312 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:00:29.044359 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:00:29.044401 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:00:29.044444 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:00:29.044492 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:00:29.044538 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:00:29.044587 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:00:29.044635 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:00:29.044686 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:00:29.044732 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:00:29.044780 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:00:29.044826 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044872 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 01:00:29.044919 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044929 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:00:29.044935 kernel: DMAR: No ATSR found Nov 1 01:00:29.044941 kernel: DMAR: No SATC found Nov 1 01:00:29.044946 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:00:29.044995 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:00:29.045044 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:00:29.045094 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 
01:00:29.045143 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:00:29.045195 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:00:29.045279 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:00:29.045328 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:00:29.045375 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:00:29.045424 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:00:29.045471 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:00:29.045520 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:00:29.045566 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:00:29.045616 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:00:29.045667 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:00:29.045716 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:00:29.045764 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:00:29.045813 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:00:29.045861 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:00:29.045909 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:00:29.045958 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:00:29.046006 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:00:29.046059 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 1 01:00:29.046109 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 01:00:29.046159 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:00:29.046209 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:00:29.046301 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:00:29.046354 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:00:29.046363 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:00:29.046369 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:00:29.046376 kernel: software IO TLB: mapped [mem 
0x0000000086fce000-0x000000008afce000] (64MB) Nov 1 01:00:29.046382 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:00:29.046388 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:00:29.046394 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:00:29.046399 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:00:29.046450 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:00:29.046459 kernel: Initialise system trusted keyrings Nov 1 01:00:29.046465 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:00:29.046472 kernel: Key type asymmetric registered Nov 1 01:00:29.046478 kernel: Asymmetric key parser 'x509' registered Nov 1 01:00:29.046483 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:00:29.046489 kernel: io scheduler mq-deadline registered Nov 1 01:00:29.046494 kernel: io scheduler kyber registered Nov 1 01:00:29.046500 kernel: io scheduler bfq registered Nov 1 01:00:29.046549 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:00:29.046597 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:00:29.046647 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:00:29.046697 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:00:29.046746 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 01:00:29.046795 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:00:29.046849 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:00:29.046858 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:00:29.046864 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Nov 1 01:00:29.046870 kernel: pstore: Using crash dump compression: deflate
Nov 1 01:00:29.046877 kernel: pstore: Registered erst as persistent store backend
Nov 1 01:00:29.046883 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 01:00:29.046889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 01:00:29.046894 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 01:00:29.046900 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 01:00:29.046906 kernel: hpet_acpi_add: no address or irqs in _CRS
Nov 1 01:00:29.046954 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Nov 1 01:00:29.046962 kernel: i8042: PNP: No PS/2 controller found.
Nov 1 01:00:29.047007 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Nov 1 01:00:29.047054 kernel: rtc_cmos rtc_cmos: registered as rtc0
Nov 1 01:00:29.047099 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:00:27 UTC (1761958827)
Nov 1 01:00:29.047145 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Nov 1 01:00:29.047153 kernel: intel_pstate: Intel P-state driver initializing
Nov 1 01:00:29.047159 kernel: intel_pstate: Disabling energy efficiency optimization
Nov 1 01:00:29.047164 kernel: intel_pstate: HWP enabled
Nov 1 01:00:29.047170 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Nov 1 01:00:29.047176 kernel: vesafb: scrolling: redraw
Nov 1 01:00:29.047183 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Nov 1 01:00:29.047189 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000002b367def, using 768k, total 768k
Nov 1 01:00:29.047195 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 01:00:29.047200 kernel: fb0: VESA VGA frame buffer device
Nov 1 01:00:29.047206 kernel: NET: Registered PF_INET6 protocol family
Nov 1 01:00:29.047212 kernel: Segment Routing with IPv6
Nov 1 01:00:29.047217 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 01:00:29.047225 kernel: NET: Registered PF_PACKET protocol family
Nov 1 01:00:29.047231 kernel: Key type dns_resolver registered
Nov 1 01:00:29.047257 kernel: microcode: Current revision: 0x00000102
Nov 1 01:00:29.047263 kernel: microcode: Microcode Update Driver: v2.2.
Nov 1 01:00:29.047284 kernel: IPI shorthand broadcast: enabled
Nov 1 01:00:29.047289 kernel: sched_clock: Marking stable (1561092791, 1369103285)->(4401430732, -1471234656)
Nov 1 01:00:29.047295 kernel: registered taskstats version 1
Nov 1 01:00:29.047301 kernel: Loading compiled-in X.509 certificates
Nov 1 01:00:29.047307 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 01:00:29.047312 kernel: Key type .fscrypt registered
Nov 1 01:00:29.047318 kernel: Key type fscrypt-provisioning registered
Nov 1 01:00:29.047324 kernel: ima: Allocated hash algorithm: sha1
Nov 1 01:00:29.047330 kernel: ima: No architecture policies found
Nov 1 01:00:29.047335 kernel: clk: Disabling unused clocks
Nov 1 01:00:29.047341 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 01:00:29.047347 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 01:00:29.047352 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 01:00:29.047358 kernel: Run /init as init process
Nov 1 01:00:29.047364 kernel: with arguments:
Nov 1 01:00:29.047369 kernel: /init
Nov 1 01:00:29.047376 kernel: with environment:
Nov 1 01:00:29.047381 kernel: HOME=/
Nov 1 01:00:29.047387 kernel: TERM=linux
Nov 1 01:00:29.047394 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 01:00:29.047402 systemd[1]: Detected architecture x86-64.
Nov 1 01:00:29.047408 systemd[1]: Running in initrd.
Nov 1 01:00:29.047414 systemd[1]: No hostname configured, using default hostname.
Nov 1 01:00:29.047420 systemd[1]: Hostname set to .
Nov 1 01:00:29.047426 systemd[1]: Initializing machine ID from random generator.
Nov 1 01:00:29.047432 systemd[1]: Queued start job for default target initrd.target.
Nov 1 01:00:29.047438 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:00:29.047444 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:00:29.047450 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 01:00:29.047456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 01:00:29.047462 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 01:00:29.047469 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 01:00:29.047475 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 01:00:29.047482 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 01:00:29.047487 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz
Nov 1 01:00:29.047493 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns
Nov 1 01:00:29.047499 kernel: clocksource: Switched to clocksource tsc
Nov 1 01:00:29.047505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:00:29.047512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:00:29.047518 systemd[1]: Reached target paths.target - Path Units.
Nov 1 01:00:29.047524 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 01:00:29.047529 systemd[1]: Reached target swap.target - Swaps.
Nov 1 01:00:29.047535 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 01:00:29.047541 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 01:00:29.047547 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 01:00:29.047553 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 01:00:29.047559 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 01:00:29.047566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:00:29.047572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:00:29.047578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:00:29.047584 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 01:00:29.047590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 01:00:29.047596 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 01:00:29.047602 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 01:00:29.047607 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 01:00:29.047614 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 01:00:29.047630 systemd-journald[267]: Collecting audit messages is disabled.
Nov 1 01:00:29.047645 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 01:00:29.047651 systemd-journald[267]: Journal started
Nov 1 01:00:29.047665 systemd-journald[267]: Runtime Journal (/run/log/journal/6233a103c9ef471e82b632159102431c) is 8.0M, max 639.9M, 631.9M free.
Nov 1 01:00:29.070641 systemd-modules-load[269]: Inserted module 'overlay'
Nov 1 01:00:29.091222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 01:00:29.111299 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 01:00:29.120650 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 01:00:29.120837 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:00:29.120930 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 01:00:29.166266 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 01:00:29.184834 systemd-modules-load[269]: Inserted module 'br_netfilter'
Nov 1 01:00:29.186617 kernel: Bridge firewalling registered
Nov 1 01:00:29.186704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 01:00:29.207853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 01:00:29.241680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:00:29.251490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:00:29.272763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:00:29.293857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:00:29.339454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 01:00:29.339933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 01:00:29.340359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 01:00:29.346143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:00:29.346657 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:00:29.347672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 01:00:29.350471 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:00:29.361995 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 01:00:29.372745 systemd-resolved[306]: Positive Trust Anchors:
Nov 1 01:00:29.372752 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 01:00:29.372787 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 01:00:29.375164 systemd-resolved[306]: Defaulting to hostname 'linux'.
Nov 1 01:00:29.491442 dracut-cmdline[308]: dracut-dracut-053
Nov 1 01:00:29.491442 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:00:29.383459 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 01:00:29.401454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:00:29.648252 kernel: SCSI subsystem initialized
Nov 1 01:00:29.672255 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 01:00:29.695251 kernel: iscsi: registered transport (tcp)
Nov 1 01:00:29.727556 kernel: iscsi: registered transport (qla4xxx)
Nov 1 01:00:29.727574 kernel: QLogic iSCSI HBA Driver
Nov 1 01:00:29.760316 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 01:00:29.791505 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 01:00:29.848057 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 01:00:29.848080 kernel: device-mapper: uevent: version 1.0.3
Nov 1 01:00:29.859253 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 01:00:29.926277 kernel: raid6: avx2x4 gen() 53094 MB/s
Nov 1 01:00:29.958285 kernel: raid6: avx2x2 gen() 53301 MB/s
Nov 1 01:00:29.994555 kernel: raid6: avx2x1 gen() 45106 MB/s
Nov 1 01:00:29.994572 kernel: raid6: using algorithm avx2x2 gen() 53301 MB/s
Nov 1 01:00:30.041606 kernel: raid6: .... xor() 32589 MB/s, rmw enabled
Nov 1 01:00:30.041622 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 01:00:30.083271 kernel: xor: automatically using best checksumming function avx
Nov 1 01:00:30.200228 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 01:00:30.205835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 01:00:30.237608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:00:30.244554 systemd-udevd[493]: Using default interface naming scheme 'v255'.
Nov 1 01:00:30.248343 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:00:30.282478 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 01:00:30.352248 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Nov 1 01:00:30.381095 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 01:00:30.391479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 01:00:30.479151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:00:30.513026 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 01:00:30.513065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 01:00:30.539226 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 01:00:30.544227 kernel: libata version 3.00 loaded.
Nov 1 01:00:30.566032 kernel: ACPI: bus type USB registered
Nov 1 01:00:30.566050 kernel: usbcore: registered new interface driver usbfs
Nov 1 01:00:30.581136 kernel: usbcore: registered new interface driver hub
Nov 1 01:00:30.595769 kernel: usbcore: registered new device driver usb
Nov 1 01:00:30.608406 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 01:00:30.672958 kernel: PTP clock support registered
Nov 1 01:00:30.672974 kernel: ahci 0000:00:17.0: version 3.0
Nov 1 01:00:30.673149 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode
Nov 1 01:00:30.673247 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 01:00:30.673256 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst
Nov 1 01:00:30.673321 kernel: AES CTR mode by8 optimization enabled
Nov 1 01:00:30.651607 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 01:00:30.705834 kernel: scsi host0: ahci
Nov 1 01:00:30.705966 kernel: scsi host1: ahci
Nov 1 01:00:30.700003 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 01:00:30.750523 kernel: scsi host2: ahci
Nov 1 01:00:30.750623 kernel: scsi host3: ahci
Nov 1 01:00:30.750688 kernel: scsi host4: ahci
Nov 1 01:00:30.731327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:00:30.793343 kernel: scsi host5: ahci Nov 1 01:00:30.793429 kernel: scsi host6: ahci Nov 1 01:00:30.793497 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 1 01:00:30.759948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:00:30.946707 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 1 01:00:30.946720 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 1 01:00:30.946727 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 1 01:00:30.946735 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 1 01:00:30.946744 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 1 01:00:30.946752 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 1 01:00:30.946763 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:00:30.946875 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 01:00:30.946944 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 01:00:30.947009 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:00:30.914260 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:00:31.146655 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 01:00:31.146865 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 01:00:31.147001 kernel: hub 1-0:1.0: USB hub found Nov 1 01:00:31.147169 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 01:00:31.147198 kernel: hub 1-0:1.0: 16 ports detected Nov 1 01:00:31.147352 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Nov 1 01:00:31.147363 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 1 01:00:31.147511 kernel: hub 2-0:1.0: USB hub found Nov 1 01:00:31.147701 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:00:31.147839 kernel: hub 2-0:1.0: 10 ports detected Nov 1 01:00:31.147973 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0b:9c Nov 1 01:00:31.148119 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 1 01:00:31.148289 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.148311 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:00:31.148455 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:30.914302 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:00:31.507099 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 1 01:00:31.507317 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.507327 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:00:31.507394 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.507403 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0b:9d Nov 1 01:00:31.507469 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.507477 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 1 01:00:31.507540 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:00:31.507548 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Nov 1 01:00:31.507610 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:00:31.507618 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 01:00:31.507691 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:00:31.507701 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:00:31.507709 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:00:31.507716 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Nov 1 01:00:31.507783 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:00:31.507792 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:00:31.507854 kernel: ata2.00: Features: NCQ-prio Nov 1 01:00:31.507862 kernel: ata1.00: Features: NCQ-prio Nov 1 01:00:31.507869 kernel: ata2.00: configured for UDMA/133 Nov 1 01:00:31.507876 kernel: hub 1-14:1.0: USB hub found Nov 1 01:00:31.507944 kernel: ata1.00: configured for UDMA/133 Nov 1 01:00:31.507953 kernel: hub 1-14:1.0: 4 ports detected Nov 1 01:00:31.508014 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:00:31.508081 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:00:31.105148 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:00:31.428704 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 01:00:31.552414 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 1 01:00:31.507157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 1 01:00:32.070346 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.070363 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:00:32.070372 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:00:32.070463 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:00:32.070533 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 01:00:32.070595 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 01:00:32.070656 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 01:00:32.070722 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:00:32.070783 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 1 01:00:32.070844 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.070854 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 1 01:00:32.070927 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 01:00:32.070936 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 01:00:32.070996 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 1 01:00:32.071058 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 1 01:00:32.071125 kernel: GPT:9289727 != 937703087 Nov 1 01:00:32.071133 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:00:32.071141 kernel: GPT:9289727 != 937703087 Nov 1 01:00:32.071147 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 01:00:32.071157 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.071164 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 1 01:00:32.071234 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 01:00:32.071299 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:00:32.071360 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 01:00:32.071420 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 1 01:00:32.071481 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 01:00:32.071591 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:00:32.071600 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:00:32.071667 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 01:00:32.071733 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Nov 1 01:00:32.071799 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (657) Nov 1 01:00:32.071808 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:00:32.071871 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sdb3 scanned by (udev-worker) (552) Nov 1 01:00:32.071881 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:00:31.507192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:00:32.127315 kernel: usbcore: registered new interface driver usbhid Nov 1 01:00:32.127327 kernel: usbhid: USB HID core driver Nov 1 01:00:32.127335 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 01:00:31.552339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:00:31.677339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 01:00:32.072159 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:00:32.164146 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 1 01:00:32.190415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:00:32.351295 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 01:00:32.351391 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 01:00:32.351400 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 1 01:00:32.351472 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 01:00:32.351544 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 1 01:00:32.257370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 1 01:00:32.373454 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 01:00:32.390074 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:00:32.409353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:00:32.415447 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 01:00:32.463316 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.439484 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:00:32.503383 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.503404 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.503460 disk-uuid[721]: Primary Header is updated. 
Nov 1 01:00:32.503460 disk-uuid[721]: Secondary Entries is updated. Nov 1 01:00:32.503460 disk-uuid[721]: Secondary Header is updated. Nov 1 01:00:32.523225 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.529985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:00:32.575477 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.575493 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.618230 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:00:32.641284 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 1 01:00:32.669318 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 1 01:00:33.541102 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:33.562167 disk-uuid[722]: The operation has completed successfully. Nov 1 01:00:33.571360 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:33.606835 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:00:33.606889 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 01:00:33.633416 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 01:00:33.678309 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:00:33.678377 sh[750]: Success Nov 1 01:00:33.712358 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 01:00:33.730174 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 01:00:33.736065 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 1 01:00:33.806276 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 01:00:33.806297 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:00:33.835880 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 01:00:33.855940 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 01:00:33.874798 kernel: BTRFS info (device dm-0): using free space tree Nov 1 01:00:33.917253 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 01:00:33.919796 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 01:00:33.929684 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 01:00:33.935446 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 01:00:33.965751 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 01:00:34.007823 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:00:34.007840 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:00:34.021260 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:00:34.065020 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:00:34.065040 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:00:34.083617 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 01:00:34.108224 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:00:34.110695 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 01:00:34.133576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 1 01:00:34.143454 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:00:34.179421 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 01:00:34.191160 systemd-networkd[934]: lo: Link UP Nov 1 01:00:34.197310 ignition[926]: Ignition 2.19.0 Nov 1 01:00:34.191163 systemd-networkd[934]: lo: Gained carrier Nov 1 01:00:34.197314 ignition[926]: Stage: fetch-offline Nov 1 01:00:34.193786 systemd-networkd[934]: Enumeration completed Nov 1 01:00:34.197336 ignition[926]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:00:34.193866 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:00:34.197345 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:00:34.194524 systemd-networkd[934]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:00:34.197403 ignition[926]: parsed url from cmdline: "" Nov 1 01:00:34.199422 unknown[926]: fetched base config from "system" Nov 1 01:00:34.197405 ignition[926]: no config URL provided Nov 1 01:00:34.199426 unknown[926]: fetched user config from "system" Nov 1 01:00:34.197408 ignition[926]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:00:34.208657 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:00:34.197432 ignition[926]: parsing config with SHA512: 50710c6d71bb93b2bef4b9fc8af9872457f8e4ac73a62fd42a21cd74dc20a7f8e104433f91adce67c7a653b28baa4702b8f09391afe34a8f9710059aba10e4e7 Nov 1 01:00:34.222199 systemd-networkd[934]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:00:34.199644 ignition[926]: fetch-offline: fetch-offline passed Nov 1 01:00:34.227622 systemd[1]: Reached target network.target - Network. 
Nov 1 01:00:34.199647 ignition[926]: POST message to Packet Timeline Nov 1 01:00:34.233412 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 01:00:34.199649 ignition[926]: POST Status error: resource requires networking Nov 1 01:00:34.249462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 01:00:34.199690 ignition[926]: Ignition finished successfully Nov 1 01:00:34.250493 systemd-networkd[934]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:00:34.259244 ignition[947]: Ignition 2.19.0 Nov 1 01:00:34.259248 ignition[947]: Stage: kargs Nov 1 01:00:34.259361 ignition[947]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:00:34.259367 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:00:34.487394 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:00:34.478997 systemd-networkd[934]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 01:00:34.259902 ignition[947]: kargs: kargs passed Nov 1 01:00:34.259905 ignition[947]: POST message to Packet Timeline Nov 1 01:00:34.259914 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:00:34.260497 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60561->[::1]:53: read: connection refused Nov 1 01:00:34.460561 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 01:00:34.461024 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54185->[::1]:53: read: connection refused Nov 1 01:00:34.757360 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:00:34.758798 systemd-networkd[934]: eno1: Link UP Nov 1 01:00:34.758961 systemd-networkd[934]: eno2: Link UP Nov 1 01:00:34.759111 systemd-networkd[934]: enp1s0f0np0: Link UP Nov 1 01:00:34.759304 systemd-networkd[934]: enp1s0f0np0: Gained carrier Nov 1 01:00:34.767398 systemd-networkd[934]: enp1s0f1np1: Link UP Nov 1 01:00:34.787314 systemd-networkd[934]: enp1s0f0np0: DHCPv4 address 145.40.82.59/31, gateway 145.40.82.58 acquired from 145.40.83.140 Nov 1 01:00:34.861337 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 01:00:34.862471 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46964->[::1]:53: read: connection refused Nov 1 01:00:35.521980 systemd-networkd[934]: enp1s0f1np1: Gained carrier Nov 1 01:00:35.662932 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 01:00:35.664098 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46304->[::1]:53: read: connection refused Nov 1 01:00:36.225857 systemd-networkd[934]: enp1s0f0np0: Gained IPv6LL Nov 1 01:00:37.057727 
systemd-networkd[934]: enp1s0f1np1: Gained IPv6LL Nov 1 01:00:37.265341 ignition[947]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 01:00:37.266520 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57329->[::1]:53: read: connection refused Nov 1 01:00:40.469006 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 01:00:42.637128 ignition[947]: GET result: OK Nov 1 01:00:43.184048 ignition[947]: Ignition finished successfully Nov 1 01:00:43.189181 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 01:00:43.226643 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 01:00:43.233886 ignition[967]: Ignition 2.19.0 Nov 1 01:00:43.233891 ignition[967]: Stage: disks Nov 1 01:00:43.234013 ignition[967]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:00:43.234020 ignition[967]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:00:43.234649 ignition[967]: disks: disks passed Nov 1 01:00:43.234652 ignition[967]: POST message to Packet Timeline Nov 1 01:00:43.234662 ignition[967]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:00:47.088550 ignition[967]: GET result: OK Nov 1 01:00:47.483837 ignition[967]: Ignition finished successfully Nov 1 01:00:47.487699 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 01:00:47.503519 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 01:00:47.521467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 01:00:47.542643 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 01:00:47.564544 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 01:00:47.584533 systemd[1]: Reached target basic.target - Basic System. Nov 1 01:00:47.618480 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 1 01:00:47.655806 systemd-fsck[986]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 1 01:00:47.665694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 01:00:47.688496 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 01:00:47.795222 kernel: EXT4-fs (sdb9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 01:00:47.795571 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 01:00:47.805658 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 01:00:47.841395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 01:00:47.850151 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 01:00:47.976984 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (996) Nov 1 01:00:47.976997 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:00:47.977005 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:00:47.977012 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:00:47.977019 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:00:47.977026 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:00:47.892879 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 01:00:47.977389 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Nov 1 01:00:48.000412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 01:00:48.000431 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Nov 1 01:00:48.069421 coreos-metadata[998]: Nov 01 01:00:48.040 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:00:48.033467 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 01:00:48.109331 coreos-metadata[1014]: Nov 01 01:00:48.044 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:00:48.059500 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 01:00:48.084461 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 01:00:48.140365 initrd-setup-root[1028]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 01:00:48.150350 initrd-setup-root[1035]: cut: /sysroot/etc/group: No such file or directory Nov 1 01:00:48.161465 initrd-setup-root[1042]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 01:00:48.171345 initrd-setup-root[1049]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 01:00:48.191128 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 01:00:48.212434 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 01:00:48.212962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 01:00:48.267462 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:00:48.234808 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 01:00:48.277488 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 1 01:00:48.293534 ignition[1116]: INFO : Ignition 2.19.0 Nov 1 01:00:48.293534 ignition[1116]: INFO : Stage: mount Nov 1 01:00:48.293534 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:00:48.293534 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:00:48.293534 ignition[1116]: INFO : mount: mount passed Nov 1 01:00:48.293534 ignition[1116]: INFO : POST message to Packet Timeline Nov 1 01:00:48.293534 ignition[1116]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:00:49.747714 coreos-metadata[1014]: Nov 01 01:00:49.747 INFO Fetch successful Nov 1 01:00:49.827419 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 01:00:49.827477 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Nov 1 01:00:50.230067 ignition[1116]: INFO : GET result: OK Nov 1 01:00:50.578527 coreos-metadata[998]: Nov 01 01:00:50.578 INFO Fetch successful Nov 1 01:00:50.611813 coreos-metadata[998]: Nov 01 01:00:50.611 INFO wrote hostname ci-4081.3.6-n-13ad226fb7 to /sysroot/etc/hostname Nov 1 01:00:50.613195 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 01:00:53.383131 ignition[1116]: INFO : Ignition finished successfully Nov 1 01:00:53.384033 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 01:00:53.418519 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 01:00:53.429183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 1 01:00:53.494279 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1140) Nov 1 01:00:53.524992 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:00:53.525008 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:00:53.543577 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:00:53.583134 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:00:53.583152 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:00:53.596447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 01:00:53.630081 ignition[1158]: INFO : Ignition 2.19.0 Nov 1 01:00:53.630081 ignition[1158]: INFO : Stage: files Nov 1 01:00:53.644463 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 01:00:53.644463 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:00:53.644463 ignition[1158]: DEBUG : files: compiled without relabeling support, skipping Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 01:00:53.644463 ignition[1158]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 01:00:53.644463 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 01:00:53.644463 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 01:00:53.634150 
unknown[1158]: wrote ssh authorized keys file for user: core Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 01:00:54.026588 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 1 01:00:54.243730 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 01:00:54.587326 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 1 01:00:54.587326 ignition[1158]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: 
createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 01:00:54.616463 ignition[1158]: INFO : files: files passed Nov 1 01:00:54.616463 ignition[1158]: INFO : POST message to Packet Timeline Nov 1 01:00:54.616463 ignition[1158]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 01:00:55.880915 ignition[1158]: INFO : GET result: OK Nov 1 01:00:57.240129 ignition[1158]: INFO : Ignition finished successfully Nov 1 01:00:57.243875 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 01:00:57.279499 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 01:00:57.289854 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 01:00:57.299652 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 01:00:57.299714 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 01:00:57.349666 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 01:00:57.361732 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 01:00:57.394450 initrd-setup-root-after-ignition[1197]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:00:57.394450 initrd-setup-root-after-ignition[1197]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:00:57.409457 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 01:00:57.399488 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 01:00:57.486580 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 01:00:57.486848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 01:00:57.508413 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 1 01:00:57.528508 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 01:00:57.548639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 01:00:57.567599 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 01:00:57.636988 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:00:57.661700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 01:00:57.712891 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:00:57.724488 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:00:57.745559 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 01:00:57.764837 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 01:00:57.765270 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:00:57.792976 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 01:00:57.814946 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 01:00:57.833841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 01:00:57.852852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 01:00:57.873946 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 01:00:57.894869 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 01:00:57.915834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 01:00:57.936975 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 01:00:57.957865 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 01:00:57.977840 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 01:00:57.996721 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 01:00:57.997129 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 01:00:58.022967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:00:58.042867 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:00:58.063816 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 01:00:58.064283 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:00:58.085823 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 01:00:58.086247 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 01:00:58.117829 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 01:00:58.118304 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 01:00:58.138046 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 01:00:58.156709 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 01:00:58.157134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:00:58.177840 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 01:00:58.196841 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 01:00:58.214913 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 01:00:58.215241 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 01:00:58.234992 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 01:00:58.235353 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 01:00:58.257950 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 01:00:58.370401 ignition[1221]: INFO : Ignition 2.19.0
Nov 1 01:00:58.370401 ignition[1221]: INFO : Stage: umount
Nov 1 01:00:58.370401 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:00:58.370401 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:00:58.370401 ignition[1221]: INFO : umount: umount passed
Nov 1 01:00:58.370401 ignition[1221]: INFO : POST message to Packet Timeline
Nov 1 01:00:58.370401 ignition[1221]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:00:58.258379 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 01:00:58.277936 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 01:00:58.278346 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 01:00:58.295888 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 01:00:58.296302 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 01:00:58.325349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 01:00:58.340905 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 01:00:58.359391 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 01:00:58.359516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:00:58.382719 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 01:00:58.382955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 01:00:58.441838 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 01:00:58.446473 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 01:00:58.446725 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 01:00:58.525919 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 01:00:58.526212 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 01:00:59.406148 ignition[1221]: INFO : GET result: OK
Nov 1 01:00:59.833039 ignition[1221]: INFO : Ignition finished successfully
Nov 1 01:00:59.836159 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 01:00:59.836490 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 01:00:59.854612 systemd[1]: Stopped target network.target - Network.
Nov 1 01:00:59.870480 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 01:00:59.870747 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 01:00:59.888657 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 01:00:59.888800 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 01:00:59.906759 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 01:00:59.906920 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 01:00:59.924745 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 01:00:59.924909 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 01:00:59.932986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 01:00:59.933154 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 01:00:59.950306 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 01:00:59.964387 systemd-networkd[934]: enp1s0f0np0: DHCPv6 lease lost
Nov 1 01:00:59.974482 systemd-networkd[934]: enp1s0f1np1: DHCPv6 lease lost
Nov 1 01:00:59.976835 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 01:00:59.996418 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 01:00:59.996705 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 01:01:00.015613 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 01:01:00.015984 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 01:01:00.035950 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 01:01:00.036079 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:01:00.072394 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 01:01:00.095381 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 01:01:00.095426 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 01:01:00.114489 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 01:01:00.114576 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:01:00.132613 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 01:01:00.132766 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:01:00.153625 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 01:01:00.153792 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:01:00.174952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:01:00.197405 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 01:01:00.197777 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:01:00.233387 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 01:01:00.233531 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:01:00.235753 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 01:01:00.235853 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:01:00.265606 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 01:01:00.265756 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 01:01:00.298841 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 01:01:00.299009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 01:01:00.327787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 01:01:00.327949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:01:00.382369 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 01:01:00.401479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 01:01:00.401520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:01:00.429511 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 01:01:00.429596 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:01:00.685421 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Nov 1 01:01:00.451637 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 01:01:00.451783 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:01:00.472507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 01:01:00.472650 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:01:00.494496 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 01:01:00.494738 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 01:01:00.543773 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 01:01:00.544057 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 01:01:00.562426 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 01:01:00.600376 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 01:01:00.634653 systemd[1]: Switching root.
Nov 1 01:01:00.789421 systemd-journald[267]: Journal stopped
Nov 1 01:00:29.033693 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 01:00:29.033708 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:00:29.033715 kernel: BIOS-provided physical RAM map:
Nov 1 01:00:29.033719 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Nov 1 01:00:29.033723 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Nov 1 01:00:29.033727 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Nov 1 01:00:29.033732 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Nov 1 01:00:29.033736 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Nov 1 01:00:29.033740 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081a73fff] usable
Nov 1 01:00:29.033744 kernel: BIOS-e820: [mem 0x0000000081a74000-0x0000000081a74fff] ACPI NVS
Nov 1 01:00:29.033748 kernel: BIOS-e820: [mem 0x0000000081a75000-0x0000000081a75fff] reserved
Nov 1 01:00:29.033753 kernel: BIOS-e820: [mem 0x0000000081a76000-0x000000008afcdfff] usable
Nov 1 01:00:29.033757 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Nov 1 01:00:29.033761 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Nov 1 01:00:29.033767 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Nov 1 01:00:29.033771 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Nov 1 01:00:29.033777 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Nov 1 01:00:29.033782 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Nov 1 01:00:29.033786 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 01:00:29.033791 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Nov 1 01:00:29.033795 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Nov 1 01:00:29.033800 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 1 01:00:29.033804 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Nov 1 01:00:29.033809 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Nov 1 01:00:29.033814 kernel: NX (Execute Disable) protection: active
Nov 1 01:00:29.033818 kernel: APIC: Static calls initialized
Nov 1 01:00:29.033823 kernel: SMBIOS 3.2.1 present.
Nov 1 01:00:29.033828 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 2.6 12/03/2024
Nov 1 01:00:29.033833 kernel: tsc: Detected 3400.000 MHz processor
Nov 1 01:00:29.033838 kernel: tsc: Detected 3399.906 MHz TSC
Nov 1 01:00:29.033843 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 01:00:29.033848 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 01:00:29.033853 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Nov 1 01:00:29.033857 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs
Nov 1 01:00:29.033862 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 01:00:29.033867 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Nov 1 01:00:29.033871 kernel: Using GB pages for direct mapping
Nov 1 01:00:29.033877 kernel: ACPI: Early table checksum verification disabled
Nov 1 01:00:29.033882 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Nov 1 01:00:29.033886 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Nov 1 01:00:29.033893 kernel: ACPI: FACP 0x000000008C58B670 000114 (v06 01072009 AMI 00010013)
Nov 1 01:00:29.033898 kernel: ACPI: DSDT 0x000000008C54F268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Nov 1 01:00:29.033903 kernel: ACPI: FACS 0x000000008C66DF80 000040
Nov 1 01:00:29.033908 kernel: ACPI: APIC 0x000000008C58B788 00012C (v04 01072009 AMI 00010013)
Nov 1 01:00:29.033914 kernel: ACPI: FPDT 0x000000008C58B8B8 000044 (v01 01072009 AMI 00010013)
Nov 1 01:00:29.033919 kernel: ACPI: FIDT 0x000000008C58B900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Nov 1 01:00:29.033924 kernel: ACPI: MCFG 0x000000008C58B9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Nov 1 01:00:29.033929 kernel: ACPI: SPMI 0x000000008C58B9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Nov 1 01:00:29.033934 kernel: ACPI: SSDT 0x000000008C58BA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Nov 1 01:00:29.033939 kernel: ACPI: SSDT 0x000000008C58D548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Nov 1 01:00:29.033943 kernel: ACPI: SSDT 0x000000008C590710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Nov 1 01:00:29.033949 kernel: ACPI: HPET 0x000000008C592A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033954 kernel: ACPI: SSDT 0x000000008C592A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Nov 1 01:00:29.033959 kernel: ACPI: SSDT 0x000000008C593A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Nov 1 01:00:29.033964 kernel: ACPI: UEFI 0x000000008C594320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033969 kernel: ACPI: LPIT 0x000000008C594368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033974 kernel: ACPI: SSDT 0x000000008C594400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Nov 1 01:00:29.033979 kernel: ACPI: SSDT 0x000000008C596BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Nov 1 01:00:29.033984 kernel: ACPI: DBGP 0x000000008C5980C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033989 kernel: ACPI: DBG2 0x000000008C598100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Nov 1 01:00:29.033995 kernel: ACPI: SSDT 0x000000008C598158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Nov 1 01:00:29.034000 kernel: ACPI: DMAR 0x000000008C599CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Nov 1 01:00:29.034005 kernel: ACPI: SSDT 0x000000008C599D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Nov 1 01:00:29.034010 kernel: ACPI: TPM2 0x000000008C599E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Nov 1 01:00:29.034015 kernel: ACPI: SSDT 0x000000008C599EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Nov 1 01:00:29.034019 kernel: ACPI: WSMT 0x000000008C59AC40 000028 (v01 SUPERM 01072009 AMI 00010013)
Nov 1 01:00:29.034024 kernel: ACPI: EINJ 0x000000008C59AC68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Nov 1 01:00:29.034029 kernel: ACPI: ERST 0x000000008C59AD98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Nov 1 01:00:29.034035 kernel: ACPI: BERT 0x000000008C59AFC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Nov 1 01:00:29.034040 kernel: ACPI: HEST 0x000000008C59AFF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Nov 1 01:00:29.034045 kernel: ACPI: SSDT 0x000000008C59B278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Nov 1 01:00:29.034050 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b670-0x8c58b783]
Nov 1 01:00:29.034055 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b66b]
Nov 1 01:00:29.034060 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Nov 1 01:00:29.034065 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b788-0x8c58b8b3]
Nov 1 01:00:29.034070 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b8b8-0x8c58b8fb]
Nov 1 01:00:29.034075 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b900-0x8c58b99b]
Nov 1 01:00:29.034081 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b9a0-0x8c58b9db]
Nov 1 01:00:29.034086 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b9e0-0x8c58ba20]
Nov 1 01:00:29.034090 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58ba28-0x8c58d543]
Nov 1 01:00:29.034095 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d548-0x8c59070d]
Nov 1 01:00:29.034100 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590710-0x8c592a3a]
Nov 1 01:00:29.034105 kernel: ACPI: Reserving HPET table memory at [mem 0x8c592a40-0x8c592a77]
Nov 1 01:00:29.034110 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a78-0x8c593a25]
Nov 1 01:00:29.034115 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593a28-0x8c59431b]
Nov 1 01:00:29.034120 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c594320-0x8c594361]
Nov 1 01:00:29.034126 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c594368-0x8c5943fb]
Nov 1 01:00:29.034131 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594400-0x8c596bdd]
Nov 1 01:00:29.034136 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596be0-0x8c5980c1]
Nov 1 01:00:29.034141 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5980c8-0x8c5980fb]
Nov 1 01:00:29.034145 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598100-0x8c598153]
Nov 1 01:00:29.034150 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598158-0x8c599cbe]
Nov 1 01:00:29.034155 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599cc0-0x8c599d2f]
Nov 1 01:00:29.034160 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599d30-0x8c599e73]
Nov 1 01:00:29.034165 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599e78-0x8c599eab]
Nov 1 01:00:29.034171 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599eb0-0x8c59ac3e]
Nov 1 01:00:29.034176 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59ac40-0x8c59ac67]
Nov 1 01:00:29.034180 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59ac68-0x8c59ad97]
Nov 1 01:00:29.034185 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad98-0x8c59afc7]
Nov 1 01:00:29.034190 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59afc8-0x8c59aff7]
Nov 1 01:00:29.034195 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59aff8-0x8c59b273]
Nov 1 01:00:29.034200 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b278-0x8c59b3d9]
Nov 1 01:00:29.034205 kernel: No NUMA configuration found
Nov 1 01:00:29.034210 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Nov 1 01:00:29.034215 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Nov 1 01:00:29.034224 kernel: Zone ranges:
Nov 1 01:00:29.034229 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 01:00:29.034234 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 01:00:29.034239 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Nov 1 01:00:29.034244 kernel: Movable zone start for each node
Nov 1 01:00:29.034249 kernel: Early memory node ranges
Nov 1 01:00:29.034253 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Nov 1 01:00:29.034258 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Nov 1 01:00:29.034263 kernel: node 0: [mem 0x0000000040400000-0x0000000081a73fff]
Nov 1 01:00:29.034269 kernel: node 0: [mem 0x0000000081a76000-0x000000008afcdfff]
Nov 1 01:00:29.034274 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Nov 1 01:00:29.034279 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Nov 1 01:00:29.034284 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Nov 1 01:00:29.034294 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Nov 1 01:00:29.034299 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 01:00:29.034304 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Nov 1 01:00:29.034309 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 1 01:00:29.034316 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Nov 1 01:00:29.034321 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Nov 1 01:00:29.034326 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Nov 1 01:00:29.034331 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Nov 1 01:00:29.034337 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Nov 1 01:00:29.034342 kernel: ACPI: PM-Timer IO Port: 0x1808
Nov 1 01:00:29.034347 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 1 01:00:29.034353 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 1 01:00:29.034358 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 1 01:00:29.034364 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 1 01:00:29.034369 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 1 01:00:29.034375 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 1 01:00:29.034380 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 1 01:00:29.034385 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 1 01:00:29.034390 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 1 01:00:29.034396 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 1 01:00:29.034401 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 1 01:00:29.034406 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 1 01:00:29.034412 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 1 01:00:29.034417 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 1 01:00:29.034423 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 1 01:00:29.034428 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 1 01:00:29.034433 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Nov 1 01:00:29.034438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 01:00:29.034444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 01:00:29.034449 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 01:00:29.034454 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 01:00:29.034460 kernel: TSC deadline timer available
Nov 1 01:00:29.034466 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Nov 1 01:00:29.034471 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Nov 1 01:00:29.034477 kernel: Booting paravirtualized kernel on bare hardware
Nov 1 01:00:29.034482 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 01:00:29.034487 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 1 01:00:29.034493 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 1 01:00:29.034498 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 1 01:00:29.034503 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 01:00:29.034510 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:00:29.034515 kernel: random: crng init done
Nov 1 01:00:29.034521 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Nov 1 01:00:29.034526 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 1 01:00:29.034531 kernel: Fallback order for Node 0: 0
Nov 1 01:00:29.034537 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Nov 1 01:00:29.034542 kernel: Policy zone: Normal
Nov 1 01:00:29.034547 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 01:00:29.034552 kernel: software IO TLB: area num 16.
Nov 1 01:00:29.034559 kernel: Memory: 32720312K/33452984K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 732412K reserved, 0K cma-reserved)
Nov 1 01:00:29.034564 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 01:00:29.034570 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 01:00:29.034575 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 01:00:29.034580 kernel: Dynamic Preempt: voluntary
Nov 1 01:00:29.034586 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 01:00:29.034591 kernel: rcu: RCU event tracing is enabled.
Nov 1 01:00:29.034597 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 01:00:29.034602 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 01:00:29.034608 kernel: Rude variant of Tasks RCU enabled.
Nov 1 01:00:29.034613 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 01:00:29.034619 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 01:00:29.034624 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 01:00:29.034629 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Nov 1 01:00:29.034635 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 01:00:29.034640 kernel: Console: colour dummy device 80x25
Nov 1 01:00:29.034645 kernel: printk: console [tty0] enabled
Nov 1 01:00:29.034650 kernel: printk: console [ttyS1] enabled
Nov 1 01:00:29.034657 kernel: ACPI: Core revision 20230628
Nov 1 01:00:29.034662 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Nov 1 01:00:29.034667 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 01:00:29.034673 kernel: DMAR: Host address width 39
Nov 1 01:00:29.034678 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Nov 1 01:00:29.034683 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Nov 1 01:00:29.034689 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Nov 1 01:00:29.034694 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Nov 1 01:00:29.034699 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Nov 1 01:00:29.034705 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Nov 1 01:00:29.034711 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Nov 1 01:00:29.034716 kernel: x2apic enabled
Nov 1 01:00:29.034721 kernel: APIC: Switched APIC routing to: cluster x2apic
Nov 1 01:00:29.034727 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Nov 1 01:00:29.034732 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Nov 1 01:00:29.034738 kernel: CPU0: Thermal monitoring enabled (TM1)
Nov 1 01:00:29.034743 kernel: process: using mwait in idle threads
Nov 1 01:00:29.034748 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 01:00:29.034754 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 01:00:29.034759 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 01:00:29.034765 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 01:00:29.034770 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 01:00:29.034776 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 1 01:00:29.034781 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 1 01:00:29.034786 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 1 01:00:29.034791 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 01:00:29.034796 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 01:00:29.034802 kernel: TAA: Mitigation: TSX disabled
Nov 1 01:00:29.034807 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Nov 1 01:00:29.034812 kernel: SRBDS: Mitigation: Microcode
Nov 1 01:00:29.034818 kernel: GDS: Mitigation: Microcode
Nov 1 01:00:29.034824 kernel: active return thunk: its_return_thunk
Nov 1 01:00:29.034829 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 01:00:29.034834 kernel: VMSCAPE: Mitigation: IBPB before exit to userspace
Nov 1 01:00:29.034839 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 01:00:29.034845 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 01:00:29.034850 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 01:00:29.034855 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 1 01:00:29.034860 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 1 01:00:29.034865 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 01:00:29.034871 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 1 01:00:29.034877 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 1 01:00:29.034882 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Nov 1 01:00:29.034887 kernel: Freeing SMP alternatives memory: 32K
Nov 1 01:00:29.034893 kernel: pid_max: default: 32768 minimum: 301
Nov 1 01:00:29.034898 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 01:00:29.034903 kernel: landlock: Up and running.
Nov 1 01:00:29.034908 kernel: SELinux: Initializing.
Nov 1 01:00:29.034914 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 01:00:29.034919 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 01:00:29.034924 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 1 01:00:29.034930 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:00:29.034936 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:00:29.034942 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:00:29.034947 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Nov 1 01:00:29.034952 kernel: ... version: 4
Nov 1 01:00:29.034958 kernel: ... bit width: 48
Nov 1 01:00:29.034963 kernel: ... generic registers: 4
Nov 1 01:00:29.034968 kernel: ... value mask: 0000ffffffffffff
Nov 1 01:00:29.034974 kernel: ... max period: 00007fffffffffff
Nov 1 01:00:29.034979 kernel: ... fixed-purpose events: 3
Nov 1 01:00:29.034985 kernel: ... event mask: 000000070000000f
Nov 1 01:00:29.034990 kernel: signal: max sigframe size: 2032
Nov 1 01:00:29.034996 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Nov 1 01:00:29.035001 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 01:00:29.035006 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 01:00:29.035012 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Nov 1 01:00:29.035017 kernel: smp: Bringing up secondary CPUs ...
Nov 1 01:00:29.035022 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 01:00:29.035027 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15
Nov 1 01:00:29.035034 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 01:00:29.035039 kernel: smp: Brought up 1 node, 16 CPUs
Nov 1 01:00:29.035045 kernel: smpboot: Max logical packages: 1
Nov 1 01:00:29.035050 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Nov 1 01:00:29.035055 kernel: devtmpfs: initialized
Nov 1 01:00:29.035060 kernel: x86/mm: Memory block size: 128MB
Nov 1 01:00:29.035066 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81a74000-0x81a74fff] (4096 bytes)
Nov 1 01:00:29.035071 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Nov 1 01:00:29.035076 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 01:00:29.035083 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 01:00:29.035088 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 01:00:29.035093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 01:00:29.035099 kernel: audit: initializing netlink subsys (disabled)
Nov 1 01:00:29.035104 kernel: audit: type=2000 audit(1761958823.039:1): state=initialized audit_enabled=0 res=1
Nov
1 01:00:29.035109 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 01:00:29.035114 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 01:00:29.035120 kernel: cpuidle: using governor menu Nov 1 01:00:29.035126 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 01:00:29.035131 kernel: dca service started, version 1.12.1 Nov 1 01:00:29.035136 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Nov 1 01:00:29.035142 kernel: PCI: Using configuration type 1 for base access Nov 1 01:00:29.035147 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Nov 1 01:00:29.035152 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 01:00:29.035158 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 01:00:29.035163 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 01:00:29.035168 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 01:00:29.035174 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 01:00:29.035180 kernel: ACPI: Added _OSI(Module Device) Nov 1 01:00:29.035185 kernel: ACPI: Added _OSI(Processor Device) Nov 1 01:00:29.035190 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 01:00:29.035196 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Nov 1 01:00:29.035201 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:00:29.035206 kernel: ACPI: SSDT 0xFFFF93C601B31C00 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Nov 1 01:00:29.035212 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:00:29.035217 kernel: ACPI: SSDT 0xFFFF93C601B28000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Nov 1 01:00:29.035225 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:00:29.035231 kernel: ACPI: SSDT 0xFFFF93C600246700 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Nov 1 01:00:29.035236 kernel: ACPI: Dynamic OEM Table Load: Nov 
1 01:00:29.035241 kernel: ACPI: SSDT 0xFFFF93C601E5D800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Nov 1 01:00:29.035246 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:00:29.035252 kernel: ACPI: SSDT 0xFFFF93C60012A000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Nov 1 01:00:29.035257 kernel: ACPI: Dynamic OEM Table Load: Nov 1 01:00:29.035262 kernel: ACPI: SSDT 0xFFFF93C601B30800 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Nov 1 01:00:29.035267 kernel: ACPI: _OSC evaluated successfully for all CPUs Nov 1 01:00:29.035273 kernel: ACPI: Interpreter enabled Nov 1 01:00:29.035279 kernel: ACPI: PM: (supports S0 S5) Nov 1 01:00:29.035284 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 01:00:29.035290 kernel: HEST: Enabling Firmware First mode for corrected errors. Nov 1 01:00:29.035295 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Nov 1 01:00:29.035300 kernel: HEST: Table parsing has been initialized. Nov 1 01:00:29.035305 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
Nov 1 01:00:29.035311 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 01:00:29.035316 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 1 01:00:29.035321 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Nov 1 01:00:29.035328 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Nov 1 01:00:29.035333 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Nov 1 01:00:29.035338 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Nov 1 01:00:29.035344 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Nov 1 01:00:29.035349 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Nov 1 01:00:29.035354 kernel: ACPI: \_TZ_.FN00: New power resource Nov 1 01:00:29.035360 kernel: ACPI: \_TZ_.FN01: New power resource Nov 1 01:00:29.035365 kernel: ACPI: \_TZ_.FN02: New power resource Nov 1 01:00:29.035370 kernel: ACPI: \_TZ_.FN03: New power resource Nov 1 01:00:29.035377 kernel: ACPI: \_TZ_.FN04: New power resource Nov 1 01:00:29.035382 kernel: ACPI: \PIN_: New power resource Nov 1 01:00:29.035387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Nov 1 01:00:29.035463 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 01:00:29.035517 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Nov 1 01:00:29.035567 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Nov 1 01:00:29.035575 kernel: PCI host bridge to bus 0000:00 Nov 1 01:00:29.035628 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 01:00:29.035673 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 01:00:29.035716 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 01:00:29.035759 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Nov 1 01:00:29.035800 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff 
window] Nov 1 01:00:29.035843 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Nov 1 01:00:29.035900 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Nov 1 01:00:29.035961 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Nov 1 01:00:29.036011 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.036065 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Nov 1 01:00:29.036114 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Nov 1 01:00:29.036168 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Nov 1 01:00:29.036217 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Nov 1 01:00:29.036277 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Nov 1 01:00:29.036327 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Nov 1 01:00:29.036375 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Nov 1 01:00:29.036427 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Nov 1 01:00:29.036475 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Nov 1 01:00:29.036524 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Nov 1 01:00:29.036578 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Nov 1 01:00:29.036627 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:00:29.036682 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Nov 1 01:00:29.036732 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:00:29.036784 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Nov 1 01:00:29.036834 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Nov 1 01:00:29.036884 kernel: pci 0000:00:16.0: PME# supported from D3hot Nov 1 01:00:29.036936 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Nov 1 01:00:29.036993 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Nov 1 
01:00:29.037044 kernel: pci 0000:00:16.1: PME# supported from D3hot Nov 1 01:00:29.037096 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Nov 1 01:00:29.037145 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Nov 1 01:00:29.037194 kernel: pci 0000:00:16.4: PME# supported from D3hot Nov 1 01:00:29.037277 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Nov 1 01:00:29.037342 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Nov 1 01:00:29.037390 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Nov 1 01:00:29.037438 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Nov 1 01:00:29.037486 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Nov 1 01:00:29.037534 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Nov 1 01:00:29.037585 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Nov 1 01:00:29.037633 kernel: pci 0000:00:17.0: PME# supported from D3hot Nov 1 01:00:29.037685 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Nov 1 01:00:29.037735 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.037793 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Nov 1 01:00:29.037846 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.037900 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Nov 1 01:00:29.037949 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.038003 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Nov 1 01:00:29.038052 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.038106 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Nov 1 01:00:29.038157 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.038210 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Nov 1 01:00:29.038302 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Nov 1 01:00:29.038355 
kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Nov 1 01:00:29.038408 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Nov 1 01:00:29.038458 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Nov 1 01:00:29.038509 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Nov 1 01:00:29.038564 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Nov 1 01:00:29.038613 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Nov 1 01:00:29.038669 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Nov 1 01:00:29.038720 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Nov 1 01:00:29.038770 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Nov 1 01:00:29.038820 kernel: pci 0000:01:00.0: PME# supported from D3cold Nov 1 01:00:29.038872 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:00:29.038923 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:00:29.038977 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000 Nov 1 01:00:29.039028 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Nov 1 01:00:29.039078 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Nov 1 01:00:29.039128 kernel: pci 0000:01:00.1: PME# supported from D3cold Nov 1 01:00:29.039179 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Nov 1 01:00:29.039257 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Nov 1 01:00:29.039322 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:00:29.039371 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:00:29.039421 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:00:29.039471 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:00:29.039527 kernel: pci 
0000:03:00.0: working around ROM BAR overlap defect Nov 1 01:00:29.039577 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:00:29.039631 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Nov 1 01:00:29.039682 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Nov 1 01:00:29.039731 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Nov 1 01:00:29.039782 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.039831 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:00:29.039881 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:00:29.039929 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:00:29.039987 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Nov 1 01:00:29.040037 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Nov 1 01:00:29.040088 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Nov 1 01:00:29.040138 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Nov 1 01:00:29.040188 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Nov 1 01:00:29.040268 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Nov 1 01:00:29.040331 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:00:29.040382 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:00:29.040433 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:00:29.040483 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:00:29.040539 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Nov 1 01:00:29.040589 kernel: pci 0000:06:00.0: enabling Extended Tags Nov 1 01:00:29.040685 kernel: pci 0000:06:00.0: supports D1 D2 Nov 1 01:00:29.040738 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:00:29.040789 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:00:29.040842 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 
01:00:29.040892 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.040947 kernel: pci_bus 0000:07: extended config space not accessible Nov 1 01:00:29.041005 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Nov 1 01:00:29.041058 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Nov 1 01:00:29.041112 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Nov 1 01:00:29.041167 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Nov 1 01:00:29.041239 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 01:00:29.041310 kernel: pci 0000:07:00.0: supports D1 D2 Nov 1 01:00:29.041363 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 1 01:00:29.041412 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:00:29.041465 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:00:29.041516 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.041524 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Nov 1 01:00:29.041530 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Nov 1 01:00:29.041537 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Nov 1 01:00:29.041543 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Nov 1 01:00:29.041549 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Nov 1 01:00:29.041554 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Nov 1 01:00:29.041560 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Nov 1 01:00:29.041566 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Nov 1 01:00:29.041571 kernel: iommu: Default domain type: Translated Nov 1 01:00:29.041577 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 01:00:29.041583 kernel: PCI: Using ACPI for IRQ routing Nov 1 01:00:29.041589 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 01:00:29.041595 kernel: e820: reserve RAM 
buffer [mem 0x00099800-0x0009ffff] Nov 1 01:00:29.041600 kernel: e820: reserve RAM buffer [mem 0x81a74000-0x83ffffff] Nov 1 01:00:29.041606 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff] Nov 1 01:00:29.041611 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff] Nov 1 01:00:29.041617 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Nov 1 01:00:29.041622 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Nov 1 01:00:29.041673 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Nov 1 01:00:29.041725 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Nov 1 01:00:29.041781 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 01:00:29.041789 kernel: vgaarb: loaded Nov 1 01:00:29.041795 kernel: clocksource: Switched to clocksource tsc-early Nov 1 01:00:29.041801 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 01:00:29.041806 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 01:00:29.041812 kernel: pnp: PnP ACPI init Nov 1 01:00:29.041862 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Nov 1 01:00:29.041912 kernel: pnp 00:02: [dma 0 disabled] Nov 1 01:00:29.041965 kernel: pnp 00:03: [dma 0 disabled] Nov 1 01:00:29.042016 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Nov 1 01:00:29.042062 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Nov 1 01:00:29.042110 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Nov 1 01:00:29.042155 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Nov 1 01:00:29.042201 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Nov 1 01:00:29.042281 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Nov 1 01:00:29.042327 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Nov 1 01:00:29.042374 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Nov 1 01:00:29.042420 kernel: system 
00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Nov 1 01:00:29.042464 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Nov 1 01:00:29.042513 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Nov 1 01:00:29.042559 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Nov 1 01:00:29.042606 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Nov 1 01:00:29.042651 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Nov 1 01:00:29.042694 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Nov 1 01:00:29.042739 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Nov 1 01:00:29.042783 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Nov 1 01:00:29.042832 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Nov 1 01:00:29.042840 kernel: pnp: PnP ACPI: found 9 devices Nov 1 01:00:29.042848 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 01:00:29.042854 kernel: NET: Registered PF_INET protocol family Nov 1 01:00:29.042859 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:00:29.042865 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 01:00:29.042871 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 01:00:29.042877 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 01:00:29.042883 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 01:00:29.042888 kernel: TCP: Hash tables configured (established 262144 bind 65536) Nov 1 01:00:29.042894 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:00:29.042900 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 01:00:29.042906 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 
01:00:29.042912 kernel: NET: Registered PF_XDP protocol family Nov 1 01:00:29.042961 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Nov 1 01:00:29.043011 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Nov 1 01:00:29.043060 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Nov 1 01:00:29.043111 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043161 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043215 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043305 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Nov 1 01:00:29.043354 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Nov 1 01:00:29.043403 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Nov 1 01:00:29.043452 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:00:29.043501 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Nov 1 01:00:29.043552 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Nov 1 01:00:29.043602 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Nov 1 01:00:29.043650 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Nov 1 01:00:29.043699 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Nov 1 01:00:29.043746 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Nov 1 01:00:29.043795 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Nov 1 01:00:29.043843 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Nov 1 01:00:29.043895 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Nov 1 01:00:29.043945 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Nov 1 01:00:29.043995 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044045 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Nov 1 01:00:29.044095 
kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Nov 1 01:00:29.044144 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044188 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Nov 1 01:00:29.044256 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 01:00:29.044312 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 01:00:29.044359 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 01:00:29.044401 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Nov 1 01:00:29.044444 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Nov 1 01:00:29.044492 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Nov 1 01:00:29.044538 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Nov 1 01:00:29.044587 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Nov 1 01:00:29.044635 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Nov 1 01:00:29.044686 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 1 01:00:29.044732 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Nov 1 01:00:29.044780 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Nov 1 01:00:29.044826 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044872 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Nov 1 01:00:29.044919 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Nov 1 01:00:29.044929 kernel: PCI: CLS 64 bytes, default 64 Nov 1 01:00:29.044935 kernel: DMAR: No ATSR found Nov 1 01:00:29.044941 kernel: DMAR: No SATC found Nov 1 01:00:29.044946 kernel: DMAR: dmar0: Using Queued invalidation Nov 1 01:00:29.044995 kernel: pci 0000:00:00.0: Adding to iommu group 0 Nov 1 01:00:29.045044 kernel: pci 0000:00:01.0: Adding to iommu group 1 Nov 1 01:00:29.045094 kernel: pci 0000:00:08.0: Adding to iommu group 2 Nov 1 
01:00:29.045143 kernel: pci 0000:00:12.0: Adding to iommu group 3 Nov 1 01:00:29.045195 kernel: pci 0000:00:14.0: Adding to iommu group 4 Nov 1 01:00:29.045279 kernel: pci 0000:00:14.2: Adding to iommu group 4 Nov 1 01:00:29.045328 kernel: pci 0000:00:15.0: Adding to iommu group 5 Nov 1 01:00:29.045375 kernel: pci 0000:00:15.1: Adding to iommu group 5 Nov 1 01:00:29.045424 kernel: pci 0000:00:16.0: Adding to iommu group 6 Nov 1 01:00:29.045471 kernel: pci 0000:00:16.1: Adding to iommu group 6 Nov 1 01:00:29.045520 kernel: pci 0000:00:16.4: Adding to iommu group 6 Nov 1 01:00:29.045566 kernel: pci 0000:00:17.0: Adding to iommu group 7 Nov 1 01:00:29.045616 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Nov 1 01:00:29.045667 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Nov 1 01:00:29.045716 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Nov 1 01:00:29.045764 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Nov 1 01:00:29.045813 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Nov 1 01:00:29.045861 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Nov 1 01:00:29.045909 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Nov 1 01:00:29.045958 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Nov 1 01:00:29.046006 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Nov 1 01:00:29.046059 kernel: pci 0000:01:00.0: Adding to iommu group 1 Nov 1 01:00:29.046109 kernel: pci 0000:01:00.1: Adding to iommu group 1 Nov 1 01:00:29.046159 kernel: pci 0000:03:00.0: Adding to iommu group 15 Nov 1 01:00:29.046209 kernel: pci 0000:04:00.0: Adding to iommu group 16 Nov 1 01:00:29.046301 kernel: pci 0000:06:00.0: Adding to iommu group 17 Nov 1 01:00:29.046354 kernel: pci 0000:07:00.0: Adding to iommu group 17 Nov 1 01:00:29.046363 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Nov 1 01:00:29.046369 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 01:00:29.046376 kernel: software IO TLB: mapped [mem 
0x0000000086fce000-0x000000008afce000] (64MB) Nov 1 01:00:29.046382 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Nov 1 01:00:29.046388 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Nov 1 01:00:29.046394 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Nov 1 01:00:29.046399 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Nov 1 01:00:29.046450 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Nov 1 01:00:29.046459 kernel: Initialise system trusted keyrings Nov 1 01:00:29.046465 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Nov 1 01:00:29.046472 kernel: Key type asymmetric registered Nov 1 01:00:29.046478 kernel: Asymmetric key parser 'x509' registered Nov 1 01:00:29.046483 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:00:29.046489 kernel: io scheduler mq-deadline registered Nov 1 01:00:29.046494 kernel: io scheduler kyber registered Nov 1 01:00:29.046500 kernel: io scheduler bfq registered Nov 1 01:00:29.046549 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Nov 1 01:00:29.046597 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Nov 1 01:00:29.046647 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Nov 1 01:00:29.046697 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Nov 1 01:00:29.046746 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Nov 1 01:00:29.046795 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Nov 1 01:00:29.046849 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Nov 1 01:00:29.046858 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Nov 1 01:00:29.046864 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. 
Nov 1 01:00:29.046870 kernel: pstore: Using crash dump compression: deflate Nov 1 01:00:29.046877 kernel: pstore: Registered erst as persistent store backend Nov 1 01:00:29.046883 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:00:29.046889 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:00:29.046894 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:00:29.046900 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 01:00:29.046906 kernel: hpet_acpi_add: no address or irqs in _CRS Nov 1 01:00:29.046954 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Nov 1 01:00:29.046962 kernel: i8042: PNP: No PS/2 controller found. Nov 1 01:00:29.047007 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Nov 1 01:00:29.047054 kernel: rtc_cmos rtc_cmos: registered as rtc0 Nov 1 01:00:29.047099 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T01:00:27 UTC (1761958827) Nov 1 01:00:29.047145 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Nov 1 01:00:29.047153 kernel: intel_pstate: Intel P-state driver initializing Nov 1 01:00:29.047159 kernel: intel_pstate: Disabling energy efficiency optimization Nov 1 01:00:29.047164 kernel: intel_pstate: HWP enabled Nov 1 01:00:29.047170 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Nov 1 01:00:29.047176 kernel: vesafb: scrolling: redraw Nov 1 01:00:29.047183 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Nov 1 01:00:29.047189 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x000000002b367def, using 768k, total 768k Nov 1 01:00:29.047195 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 01:00:29.047200 kernel: fb0: VESA VGA frame buffer device Nov 1 01:00:29.047206 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:00:29.047212 kernel: Segment Routing with IPv6 Nov 1 01:00:29.047217 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 
01:00:29.047225 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:00:29.047231 kernel: Key type dns_resolver registered Nov 1 01:00:29.047257 kernel: microcode: Current revision: 0x00000102 Nov 1 01:00:29.047263 kernel: microcode: Microcode Update Driver: v2.2. Nov 1 01:00:29.047284 kernel: IPI shorthand broadcast: enabled Nov 1 01:00:29.047289 kernel: sched_clock: Marking stable (1561092791, 1369103285)->(4401430732, -1471234656) Nov 1 01:00:29.047295 kernel: registered taskstats version 1 Nov 1 01:00:29.047301 kernel: Loading compiled-in X.509 certificates Nov 1 01:00:29.047307 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 01:00:29.047312 kernel: Key type .fscrypt registered Nov 1 01:00:29.047318 kernel: Key type fscrypt-provisioning registered Nov 1 01:00:29.047324 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:00:29.047330 kernel: ima: No architecture policies found Nov 1 01:00:29.047335 kernel: clk: Disabling unused clocks Nov 1 01:00:29.047341 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 01:00:29.047347 kernel: Write protecting the kernel read-only data: 36864k Nov 1 01:00:29.047352 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 01:00:29.047358 kernel: Run /init as init process Nov 1 01:00:29.047364 kernel: with arguments: Nov 1 01:00:29.047369 kernel: /init Nov 1 01:00:29.047376 kernel: with environment: Nov 1 01:00:29.047381 kernel: HOME=/ Nov 1 01:00:29.047387 kernel: TERM=linux Nov 1 01:00:29.047394 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 01:00:29.047402 systemd[1]: Detected architecture x86-64. 
Nov 1 01:00:29.047408 systemd[1]: Running in initrd. Nov 1 01:00:29.047414 systemd[1]: No hostname configured, using default hostname. Nov 1 01:00:29.047420 systemd[1]: Hostname set to . Nov 1 01:00:29.047426 systemd[1]: Initializing machine ID from random generator. Nov 1 01:00:29.047432 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:00:29.047438 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:00:29.047444 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:00:29.047450 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 01:00:29.047456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:00:29.047462 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 01:00:29.047469 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 01:00:29.047475 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 01:00:29.047482 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 01:00:29.047487 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Nov 1 01:00:29.047493 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Nov 1 01:00:29.047499 kernel: clocksource: Switched to clocksource tsc Nov 1 01:00:29.047505 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:00:29.047512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:00:29.047518 systemd[1]: Reached target paths.target - Path Units. 
Nov 1 01:00:29.047524 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:00:29.047529 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:00:29.047535 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:00:29.047541 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:00:29.047547 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:00:29.047553 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 01:00:29.047559 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 01:00:29.047566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:00:29.047572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:00:29.047578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:00:29.047584 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:00:29.047590 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 01:00:29.047596 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:00:29.047602 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 01:00:29.047607 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:00:29.047614 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:00:29.047630 systemd-journald[267]: Collecting audit messages is disabled. Nov 1 01:00:29.047645 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 01:00:29.047651 systemd-journald[267]: Journal started Nov 1 01:00:29.047665 systemd-journald[267]: Runtime Journal (/run/log/journal/6233a103c9ef471e82b632159102431c) is 8.0M, max 639.9M, 631.9M free. Nov 1 01:00:29.070641 systemd-modules-load[269]: Inserted module 'overlay' Nov 1 01:00:29.091222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 01:00:29.111299 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 01:00:29.120650 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 01:00:29.120837 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:00:29.120930 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:00:29.166266 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:00:29.184834 systemd-modules-load[269]: Inserted module 'br_netfilter' Nov 1 01:00:29.186617 kernel: Bridge firewalling registered Nov 1 01:00:29.186704 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:00:29.207853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:00:29.241680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:00:29.251490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:00:29.272763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:00:29.293857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:00:29.339454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:00:29.339933 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:00:29.340359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:00:29.346143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:00:29.346657 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:00:29.347672 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 1 01:00:29.350471 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:00:29.361995 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 01:00:29.372745 systemd-resolved[306]: Positive Trust Anchors: Nov 1 01:00:29.372752 systemd-resolved[306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:00:29.372787 systemd-resolved[306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:00:29.375164 systemd-resolved[306]: Defaulting to hostname 'linux'. Nov 1 01:00:29.491442 dracut-cmdline[308]: dracut-dracut-053 Nov 1 01:00:29.491442 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:00:29.383459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:00:29.401454 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:00:29.648252 kernel: SCSI subsystem initialized Nov 1 01:00:29.672255 kernel: Loading iSCSI transport class v2.0-870. 
Nov 1 01:00:29.695251 kernel: iscsi: registered transport (tcp) Nov 1 01:00:29.727556 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:00:29.727574 kernel: QLogic iSCSI HBA Driver Nov 1 01:00:29.760316 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 01:00:29.791505 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 01:00:29.848057 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:00:29.848080 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:00:29.859253 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 01:00:29.926277 kernel: raid6: avx2x4 gen() 53094 MB/s Nov 1 01:00:29.958285 kernel: raid6: avx2x2 gen() 53301 MB/s Nov 1 01:00:29.994555 kernel: raid6: avx2x1 gen() 45106 MB/s Nov 1 01:00:29.994572 kernel: raid6: using algorithm avx2x2 gen() 53301 MB/s Nov 1 01:00:30.041606 kernel: raid6: .... xor() 32589 MB/s, rmw enabled Nov 1 01:00:30.041622 kernel: raid6: using avx2x2 recovery algorithm Nov 1 01:00:30.083271 kernel: xor: automatically using best checksumming function avx Nov 1 01:00:30.200228 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 01:00:30.205835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:00:30.237608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:00:30.244554 systemd-udevd[493]: Using default interface naming scheme 'v255'. Nov 1 01:00:30.248343 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:00:30.282478 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 01:00:30.352248 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Nov 1 01:00:30.381095 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 1 01:00:30.391479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:00:30.479151 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:00:30.513026 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 01:00:30.513065 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 01:00:30.539226 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:00:30.544227 kernel: libata version 3.00 loaded. Nov 1 01:00:30.566032 kernel: ACPI: bus type USB registered Nov 1 01:00:30.566050 kernel: usbcore: registered new interface driver usbfs Nov 1 01:00:30.581136 kernel: usbcore: registered new interface driver hub Nov 1 01:00:30.595769 kernel: usbcore: registered new device driver usb Nov 1 01:00:30.608406 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 01:00:30.672958 kernel: PTP clock support registered Nov 1 01:00:30.672974 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 01:00:30.673149 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Nov 1 01:00:30.673247 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 01:00:30.673256 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 01:00:30.673321 kernel: AES CTR mode by8 optimization enabled Nov 1 01:00:30.651607 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 01:00:30.705834 kernel: scsi host0: ahci Nov 1 01:00:30.705966 kernel: scsi host1: ahci Nov 1 01:00:30.700003 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:00:30.750523 kernel: scsi host2: ahci Nov 1 01:00:30.750623 kernel: scsi host3: ahci Nov 1 01:00:30.750688 kernel: scsi host4: ahci Nov 1 01:00:30.731327 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 1 01:00:30.793343 kernel: scsi host5: ahci Nov 1 01:00:30.793429 kernel: scsi host6: ahci Nov 1 01:00:30.793497 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Nov 1 01:00:30.759948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:00:30.946707 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Nov 1 01:00:30.946720 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Nov 1 01:00:30.946727 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Nov 1 01:00:30.946735 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Nov 1 01:00:30.946744 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Nov 1 01:00:30.946752 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Nov 1 01:00:30.946763 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:00:30.946875 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 01:00:30.946944 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 01:00:30.947009 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 01:00:30.914260 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:00:31.146655 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 01:00:31.146865 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 01:00:31.147001 kernel: hub 1-0:1.0: USB hub found Nov 1 01:00:31.147169 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 01:00:31.147198 kernel: hub 1-0:1.0: 16 ports detected Nov 1 01:00:31.147352 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Nov 1 01:00:31.147363 kernel: igb 0000:03:00.0: added PHC on eth0 Nov 1 01:00:31.147511 kernel: hub 2-0:1.0: USB hub found Nov 1 01:00:31.147701 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:00:31.147839 kernel: hub 2-0:1.0: 10 ports detected Nov 1 01:00:31.147973 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0b:9c Nov 1 01:00:31.148119 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Nov 1 01:00:31.148289 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.148311 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 01:00:31.148455 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:30.914302 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:00:31.507099 kernel: igb 0000:04:00.0: added PHC on eth1 Nov 1 01:00:31.507317 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.507327 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 01:00:31.507394 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.507403 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6b:0b:9d Nov 1 01:00:31.507469 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:00:31.507477 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Nov 1 01:00:31.507540 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:00:31.507548 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Nov 1 01:00:31.507610 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 01:00:31.507618 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 01:00:31.507691 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:00:31.507701 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 01:00:31.507709 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:00:31.507716 kernel: mlx5_core 0000:01:00.0: firmware version: 14.31.1014 Nov 1 01:00:31.507783 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 01:00:31.507792 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:00:31.507854 kernel: ata2.00: Features: NCQ-prio Nov 1 01:00:31.507862 kernel: ata1.00: Features: NCQ-prio Nov 1 01:00:31.507869 kernel: ata2.00: configured for UDMA/133 Nov 1 01:00:31.507876 kernel: hub 1-14:1.0: USB hub found Nov 1 01:00:31.507944 kernel: ata1.00: configured for UDMA/133 Nov 1 01:00:31.507953 kernel: hub 1-14:1.0: 4 ports detected Nov 1 01:00:31.508014 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:00:31.508081 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 01:00:31.105148 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:00:31.428704 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 01:00:31.552414 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Nov 1 01:00:31.507157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 1 01:00:32.070346 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.070363 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:00:32.070372 kernel: sd 1:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:00:32.070463 kernel: sd 0:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 01:00:32.070533 kernel: sd 0:0:0:0: [sdb] 4096-byte physical blocks Nov 1 01:00:32.070595 kernel: sd 0:0:0:0: [sdb] Write Protect is off Nov 1 01:00:32.070656 kernel: sd 0:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 01:00:32.070722 kernel: sd 0:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:00:32.070783 kernel: sd 0:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Nov 1 01:00:32.070844 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.070854 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Nov 1 01:00:32.070927 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 01:00:32.070936 kernel: sd 1:0:0:0: [sda] 4096-byte physical blocks Nov 1 01:00:32.070996 kernel: sd 1:0:0:0: [sda] Write Protect is off Nov 1 01:00:32.071058 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 1 01:00:32.071125 kernel: GPT:9289727 != 937703087 Nov 1 01:00:32.071133 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:00:32.071141 kernel: GPT:9289727 != 937703087 Nov 1 01:00:32.071147 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 01:00:32.071157 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.071164 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Nov 1 01:00:32.071234 kernel: sd 1:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 01:00:32.071299 kernel: sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 01:00:32.071360 kernel: sd 0:0:0:0: [sdb] Attached SCSI disk Nov 1 01:00:32.071420 kernel: sd 1:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Nov 1 01:00:32.071481 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 01:00:32.071591 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 01:00:32.071600 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:00:32.071667 kernel: sd 1:0:0:0: [sda] Attached SCSI disk Nov 1 01:00:32.071733 kernel: mlx5_core 0000:01:00.1: firmware version: 14.31.1014 Nov 1 01:00:32.071799 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (657) Nov 1 01:00:32.071808 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 01:00:32.071871 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sdb3 scanned by (udev-worker) (552) Nov 1 01:00:32.071881 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:00:31.507192 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:00:32.127315 kernel: usbcore: registered new interface driver usbhid Nov 1 01:00:32.127327 kernel: usbhid: USB HID core driver Nov 1 01:00:32.127335 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 01:00:31.552339 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:00:31.677339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 01:00:32.072159 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:00:32.164146 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Nov 1 01:00:32.190415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:00:32.351295 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 01:00:32.351391 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 01:00:32.351400 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 1 01:00:32.351472 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 01:00:32.351544 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Nov 1 01:00:32.257370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Nov 1 01:00:32.373454 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 01:00:32.390074 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:00:32.409353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Nov 1 01:00:32.415447 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 01:00:32.463316 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.439484 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:00:32.503383 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.503404 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.503460 disk-uuid[721]: Primary Header is updated. 
Nov 1 01:00:32.503460 disk-uuid[721]: Secondary Entries is updated. Nov 1 01:00:32.503460 disk-uuid[721]: Secondary Header is updated. Nov 1 01:00:32.523225 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.529985 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:00:32.575477 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:32.575493 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:32.618230 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 01:00:32.641284 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Nov 1 01:00:32.669318 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Nov 1 01:00:33.541102 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 01:00:33.562167 disk-uuid[722]: The operation has completed successfully. Nov 1 01:00:33.571360 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 01:00:33.606835 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:00:33.606889 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 01:00:33.633416 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 01:00:33.678309 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 01:00:33.678377 sh[750]: Success Nov 1 01:00:33.712358 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 01:00:33.730174 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 01:00:33.736065 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 1 01:00:33.806276 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 01:00:33.806297 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:00:33.835880 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 01:00:33.855940 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 01:00:33.874798 kernel: BTRFS info (device dm-0): using free space tree Nov 1 01:00:33.917253 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 01:00:33.919796 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 01:00:33.929684 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 01:00:33.935446 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 01:00:33.965751 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 01:00:34.007823 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:00:34.007840 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:00:34.021260 kernel: BTRFS info (device sdb6): using free space tree Nov 1 01:00:34.065020 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 01:00:34.065040 kernel: BTRFS info (device sdb6): auto enabling async discard Nov 1 01:00:34.083617 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 01:00:34.108224 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:00:34.110695 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 01:00:34.133576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 1 01:00:34.143454 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:00:34.179421 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 01:00:34.191160 systemd-networkd[934]: lo: Link UP Nov 1 01:00:34.197310 ignition[926]: Ignition 2.19.0 Nov 1 01:00:34.191163 systemd-networkd[934]: lo: Gained carrier Nov 1 01:00:34.197314 ignition[926]: Stage: fetch-offline Nov 1 01:00:34.193786 systemd-networkd[934]: Enumeration completed Nov 1 01:00:34.197336 ignition[926]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:00:34.193866 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:00:34.197345 ignition[926]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 01:00:34.194524 systemd-networkd[934]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:00:34.197403 ignition[926]: parsed url from cmdline: "" Nov 1 01:00:34.199422 unknown[926]: fetched base config from "system" Nov 1 01:00:34.197405 ignition[926]: no config URL provided Nov 1 01:00:34.199426 unknown[926]: fetched user config from "system" Nov 1 01:00:34.197408 ignition[926]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:00:34.208657 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 01:00:34.197432 ignition[926]: parsing config with SHA512: 50710c6d71bb93b2bef4b9fc8af9872457f8e4ac73a62fd42a21cd74dc20a7f8e104433f91adce67c7a653b28baa4702b8f09391afe34a8f9710059aba10e4e7 Nov 1 01:00:34.222199 systemd-networkd[934]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:00:34.199644 ignition[926]: fetch-offline: fetch-offline passed Nov 1 01:00:34.227622 systemd[1]: Reached target network.target - Network. 
Nov 1 01:00:34.199647 ignition[926]: POST message to Packet Timeline
Nov 1 01:00:34.233412 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 1 01:00:34.199649 ignition[926]: POST Status error: resource requires networking
Nov 1 01:00:34.249462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 01:00:34.199690 ignition[926]: Ignition finished successfully
Nov 1 01:00:34.250493 systemd-networkd[934]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:00:34.259244 ignition[947]: Ignition 2.19.0
Nov 1 01:00:34.259248 ignition[947]: Stage: kargs
Nov 1 01:00:34.259361 ignition[947]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:00:34.259367 ignition[947]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:00:34.487394 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
Nov 1 01:00:34.478997 systemd-networkd[934]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:00:34.259902 ignition[947]: kargs: kargs passed
Nov 1 01:00:34.259905 ignition[947]: POST message to Packet Timeline
Nov 1 01:00:34.259914 ignition[947]: GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:00:34.260497 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60561->[::1]:53: read: connection refused
Nov 1 01:00:34.460561 ignition[947]: GET https://metadata.packet.net/metadata: attempt #2
Nov 1 01:00:34.461024 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54185->[::1]:53: read: connection refused
Nov 1 01:00:34.757360 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
Nov 1 01:00:34.758798 systemd-networkd[934]: eno1: Link UP
Nov 1 01:00:34.758961 systemd-networkd[934]: eno2: Link UP
Nov 1 01:00:34.759111 systemd-networkd[934]: enp1s0f0np0: Link UP
Nov 1 01:00:34.759304 systemd-networkd[934]: enp1s0f0np0: Gained carrier
Nov 1 01:00:34.767398 systemd-networkd[934]: enp1s0f1np1: Link UP
Nov 1 01:00:34.787314 systemd-networkd[934]: enp1s0f0np0: DHCPv4 address 145.40.82.59/31, gateway 145.40.82.58 acquired from 145.40.83.140
Nov 1 01:00:34.861337 ignition[947]: GET https://metadata.packet.net/metadata: attempt #3
Nov 1 01:00:34.862471 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46964->[::1]:53: read: connection refused
Nov 1 01:00:35.521980 systemd-networkd[934]: enp1s0f1np1: Gained carrier
Nov 1 01:00:35.662932 ignition[947]: GET https://metadata.packet.net/metadata: attempt #4
Nov 1 01:00:35.664098 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:46304->[::1]:53: read: connection refused
Nov 1 01:00:36.225857 systemd-networkd[934]: enp1s0f0np0: Gained IPv6LL
Nov 1 01:00:37.057727 systemd-networkd[934]: enp1s0f1np1: Gained IPv6LL
Nov 1 01:00:37.265341 ignition[947]: GET https://metadata.packet.net/metadata: attempt #5
Nov 1 01:00:37.266520 ignition[947]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:57329->[::1]:53: read: connection refused
Nov 1 01:00:40.469006 ignition[947]: GET https://metadata.packet.net/metadata: attempt #6
Nov 1 01:00:42.637128 ignition[947]: GET result: OK
Nov 1 01:00:43.184048 ignition[947]: Ignition finished successfully
Nov 1 01:00:43.189181 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 01:00:43.226643 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 01:00:43.233886 ignition[967]: Ignition 2.19.0
Nov 1 01:00:43.233891 ignition[967]: Stage: disks
Nov 1 01:00:43.234013 ignition[967]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:00:43.234020 ignition[967]: no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:00:43.234649 ignition[967]: disks: disks passed
Nov 1 01:00:43.234652 ignition[967]: POST message to Packet Timeline
Nov 1 01:00:43.234662 ignition[967]: GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:00:47.088550 ignition[967]: GET result: OK
Nov 1 01:00:47.483837 ignition[967]: Ignition finished successfully
Nov 1 01:00:47.487699 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 01:00:47.503519 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 01:00:47.521467 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 01:00:47.542643 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 01:00:47.564544 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 01:00:47.584533 systemd[1]: Reached target basic.target - Basic System.
Nov 1 01:00:47.618480 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 01:00:47.655806 systemd-fsck[986]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 1 01:00:47.665694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 01:00:47.688496 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 01:00:47.795222 kernel: EXT4-fs (sdb9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 01:00:47.795571 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 01:00:47.805658 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 01:00:47.841395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 01:00:47.850151 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 01:00:47.976984 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (996)
Nov 1 01:00:47.976997 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:00:47.977005 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:00:47.977012 kernel: BTRFS info (device sdb6): using free space tree
Nov 1 01:00:47.977019 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Nov 1 01:00:47.977026 kernel: BTRFS info (device sdb6): auto enabling async discard
Nov 1 01:00:47.892879 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 1 01:00:47.977389 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent...
Nov 1 01:00:48.000412 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 01:00:48.000431 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 01:00:48.069421 coreos-metadata[998]: Nov 01 01:00:48.040 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 01:00:48.033467 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 01:00:48.109331 coreos-metadata[1014]: Nov 01 01:00:48.044 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
Nov 1 01:00:48.059500 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 01:00:48.084461 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 01:00:48.140365 initrd-setup-root[1028]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 01:00:48.150350 initrd-setup-root[1035]: cut: /sysroot/etc/group: No such file or directory
Nov 1 01:00:48.161465 initrd-setup-root[1042]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 01:00:48.171345 initrd-setup-root[1049]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 01:00:48.191128 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 01:00:48.212434 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 01:00:48.212962 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 01:00:48.267462 kernel: BTRFS info (device sdb6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:00:48.234808 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 01:00:48.277488 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 01:00:48.293534 ignition[1116]: INFO : Ignition 2.19.0
Nov 1 01:00:48.293534 ignition[1116]: INFO : Stage: mount
Nov 1 01:00:48.293534 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:00:48.293534 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:00:48.293534 ignition[1116]: INFO : mount: mount passed
Nov 1 01:00:48.293534 ignition[1116]: INFO : POST message to Packet Timeline
Nov 1 01:00:48.293534 ignition[1116]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:00:49.747714 coreos-metadata[1014]: Nov 01 01:00:49.747 INFO Fetch successful
Nov 1 01:00:49.827419 systemd[1]: flatcar-static-network.service: Deactivated successfully.
Nov 1 01:00:49.827477 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent.
Nov 1 01:00:50.230067 ignition[1116]: INFO : GET result: OK
Nov 1 01:00:50.578527 coreos-metadata[998]: Nov 01 01:00:50.578 INFO Fetch successful
Nov 1 01:00:50.611813 coreos-metadata[998]: Nov 01 01:00:50.611 INFO wrote hostname ci-4081.3.6-n-13ad226fb7 to /sysroot/etc/hostname
Nov 1 01:00:50.613195 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 01:00:53.383131 ignition[1116]: INFO : Ignition finished successfully
Nov 1 01:00:53.384033 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 01:00:53.418519 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 01:00:53.429183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 01:00:53.494279 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1140)
Nov 1 01:00:53.524992 kernel: BTRFS info (device sdb6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:00:53.525008 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:00:53.543577 kernel: BTRFS info (device sdb6): using free space tree
Nov 1 01:00:53.583134 kernel: BTRFS info (device sdb6): enabling ssd optimizations
Nov 1 01:00:53.583152 kernel: BTRFS info (device sdb6): auto enabling async discard
Nov 1 01:00:53.596447 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 01:00:53.630081 ignition[1158]: INFO : Ignition 2.19.0
Nov 1 01:00:53.630081 ignition[1158]: INFO : Stage: files
Nov 1 01:00:53.644463 ignition[1158]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:00:53.644463 ignition[1158]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:00:53.644463 ignition[1158]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 01:00:53.644463 ignition[1158]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 01:00:53.644463 ignition[1158]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 01:00:53.644463 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 01:00:53.644463 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 01:00:53.634150 unknown[1158]: wrote ssh authorized keys file for user: core
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 01:00:53.777386 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 01:00:54.026588 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 1 01:00:54.243730 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 01:00:54.587326 ignition[1158]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 1 01:00:54.587326 ignition[1158]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:00:54.616463 ignition[1158]: INFO : files: files passed
Nov 1 01:00:54.616463 ignition[1158]: INFO : POST message to Packet Timeline
Nov 1 01:00:54.616463 ignition[1158]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:00:55.880915 ignition[1158]: INFO : GET result: OK
Nov 1 01:00:57.240129 ignition[1158]: INFO : Ignition finished successfully
Nov 1 01:00:57.243875 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 01:00:57.279499 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 01:00:57.289854 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 01:00:57.299652 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 01:00:57.299714 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 01:00:57.349666 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 01:00:57.361732 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 01:00:57.394450 initrd-setup-root-after-ignition[1197]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:00:57.394450 initrd-setup-root-after-ignition[1197]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:00:57.409457 initrd-setup-root-after-ignition[1201]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:00:57.399488 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 01:00:57.486580 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 01:00:57.486848 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 01:00:57.508413 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 01:00:57.528508 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 01:00:57.548639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 01:00:57.567599 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 01:00:57.636988 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:00:57.661700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 01:00:57.712891 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:00:57.724488 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:00:57.745559 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 01:00:57.764837 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 01:00:57.765270 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:00:57.792976 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 01:00:57.814946 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 01:00:57.833841 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 01:00:57.852852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 01:00:57.873946 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 01:00:57.894869 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 01:00:57.915834 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 01:00:57.936975 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 01:00:57.957865 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 01:00:57.977840 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 01:00:57.996721 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 01:00:57.997129 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 01:00:58.022967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:00:58.042867 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:00:58.063816 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 01:00:58.064283 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:00:58.085823 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 01:00:58.086247 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 01:00:58.117829 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 01:00:58.118304 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 01:00:58.138046 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 01:00:58.156709 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 01:00:58.157134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:00:58.177840 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 01:00:58.196841 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 01:00:58.214913 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 01:00:58.215241 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 01:00:58.234992 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 01:00:58.235353 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 01:00:58.257950 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 01:00:58.370401 ignition[1221]: INFO : Ignition 2.19.0
Nov 1 01:00:58.370401 ignition[1221]: INFO : Stage: umount
Nov 1 01:00:58.370401 ignition[1221]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:00:58.370401 ignition[1221]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
Nov 1 01:00:58.370401 ignition[1221]: INFO : umount: umount passed
Nov 1 01:00:58.370401 ignition[1221]: INFO : POST message to Packet Timeline
Nov 1 01:00:58.370401 ignition[1221]: INFO : GET https://metadata.packet.net/metadata: attempt #1
Nov 1 01:00:58.258379 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 01:00:58.277936 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 01:00:58.278346 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 01:00:58.295888 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 01:00:58.296302 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 01:00:58.325349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 01:00:58.340905 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 01:00:58.359391 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 01:00:58.359516 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:00:58.382719 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 01:00:58.382955 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 01:00:58.441838 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 01:00:58.446473 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 01:00:58.446725 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 01:00:58.525919 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 01:00:58.526212 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 01:00:59.406148 ignition[1221]: INFO : GET result: OK
Nov 1 01:00:59.833039 ignition[1221]: INFO : Ignition finished successfully
Nov 1 01:00:59.836159 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 01:00:59.836490 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 01:00:59.854612 systemd[1]: Stopped target network.target - Network.
Nov 1 01:00:59.870480 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 01:00:59.870747 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 01:00:59.888657 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 01:00:59.888800 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 01:00:59.906759 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 01:00:59.906920 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 01:00:59.924745 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 01:00:59.924909 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 01:00:59.932986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 01:00:59.933154 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 01:00:59.950306 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 01:00:59.964387 systemd-networkd[934]: enp1s0f0np0: DHCPv6 lease lost
Nov 1 01:00:59.974482 systemd-networkd[934]: enp1s0f1np1: DHCPv6 lease lost
Nov 1 01:00:59.976835 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 01:00:59.996418 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 01:00:59.996705 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 01:01:00.015613 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 01:01:00.015984 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 01:01:00.035950 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 01:01:00.036079 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:01:00.072394 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 01:01:00.095381 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 01:01:00.095426 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 01:01:00.114489 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 01:01:00.114576 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:01:00.132613 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 01:01:00.132766 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:01:00.153625 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 01:01:00.153792 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:01:00.174952 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:01:00.197405 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 01:01:00.197777 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:01:00.233387 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 01:01:00.233531 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:01:00.235753 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 01:01:00.235853 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:01:00.265606 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 01:01:00.265756 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 01:01:00.298841 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 01:01:00.299009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 01:01:00.327787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 01:01:00.327949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:01:00.382369 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 01:01:00.401479 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 01:01:00.401520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:01:00.429511 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 01:01:00.429596 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:01:00.685421 systemd-journald[267]: Received SIGTERM from PID 1 (systemd).
Nov 1 01:01:00.451637 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 01:01:00.451783 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:01:00.472507 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 01:01:00.472650 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:01:00.494496 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 01:01:00.494738 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 01:01:00.543773 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 01:01:00.544057 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 01:01:00.562426 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 01:01:00.600376 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 01:01:00.634653 systemd[1]: Switching root.
Nov 1 01:01:00.789421 systemd-journald[267]: Journal stopped
Nov 1 01:01:03.430990 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 01:01:03.431005 kernel: SELinux: policy capability open_perms=1
Nov 1 01:01:03.431012 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 01:01:03.431019 kernel: SELinux: policy capability always_check_network=0
Nov 1 01:01:03.431024 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 01:01:03.431030 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 01:01:03.431036 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 01:01:03.431041 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 01:01:03.431047 kernel: audit: type=1403 audit(1761958861.008:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 01:01:03.431054 systemd[1]: Successfully loaded SELinux policy in 156.051ms.
Nov 1 01:01:03.431062 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.276ms.
Nov 1 01:01:03.431069 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 01:01:03.431075 systemd[1]: Detected architecture x86-64.
Nov 1 01:01:03.431081 systemd[1]: Detected first boot.
Nov 1 01:01:03.431088 systemd[1]: Hostname set to .
Nov 1 01:01:03.431095 systemd[1]: Initializing machine ID from random generator.
Nov 1 01:01:03.431102 zram_generator::config[1273]: No configuration found.
Nov 1 01:01:03.431109 systemd[1]: Populated /etc with preset unit settings.
Nov 1 01:01:03.431115 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 01:01:03.431122 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 01:01:03.431128 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 01:01:03.431135 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 01:01:03.431142 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 01:01:03.431149 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 01:01:03.431155 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 01:01:03.431162 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 01:01:03.431169 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 01:01:03.431175 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 01:01:03.431182 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 01:01:03.431190 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:01:03.431196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:01:03.431203 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 01:01:03.431209 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 01:01:03.431216 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 01:01:03.431225 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 01:01:03.431233 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1...
Nov 1 01:01:03.431240 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:01:03.431248 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 01:01:03.431254 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 01:01:03.431261 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 01:01:03.431269 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 01:01:03.431277 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:01:03.431283 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 01:01:03.431290 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 01:01:03.431298 systemd[1]: Reached target swap.target - Swaps.
Nov 1 01:01:03.431305 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 01:01:03.431311 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 01:01:03.431318 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:01:03.431325 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:01:03.431332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:01:03.431340 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 01:01:03.431347 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 01:01:03.431353 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 01:01:03.431360 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 01:01:03.431367 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:01:03.431374 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 01:01:03.431381 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 01:01:03.431388 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 01:01:03.431396 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 01:01:03.431403 systemd[1]: Reached target machines.target - Containers. Nov 1 01:01:03.431409 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 01:01:03.431416 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 01:01:03.431423 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:01:03.431430 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 01:01:03.431437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 01:01:03.431444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 01:01:03.431451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 01:01:03.431458 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 01:01:03.431465 kernel: ACPI: bus type drm_connector registered Nov 1 01:01:03.431471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 01:01:03.431478 kernel: fuse: init (API version 7.39) Nov 1 01:01:03.431484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 01:01:03.431491 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 01:01:03.431498 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 01:01:03.431506 kernel: loop: module loaded Nov 1 01:01:03.431512 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 01:01:03.431519 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 1 01:01:03.431527 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:01:03.431542 systemd-journald[1377]: Collecting audit messages is disabled. Nov 1 01:01:03.431557 systemd-journald[1377]: Journal started Nov 1 01:01:03.431571 systemd-journald[1377]: Runtime Journal (/run/log/journal/9a1528cbffc843869ff6ca016dec50a3) is 8.0M, max 639.9M, 631.9M free. Nov 1 01:01:01.568817 systemd[1]: Queued start job for default target multi-user.target. Nov 1 01:01:01.586456 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Nov 1 01:01:01.586712 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 01:01:03.460279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 01:01:03.493262 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 01:01:03.527300 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 01:01:03.560273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:01:03.593595 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 01:01:03.593626 systemd[1]: Stopped verity-setup.service. Nov 1 01:01:03.656273 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:03.677435 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 01:01:03.687833 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 01:01:03.699519 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 01:01:03.709511 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 01:01:03.719503 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 01:01:03.729479 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Nov 1 01:01:03.739483 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 01:01:03.749588 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 01:01:03.760698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:01:03.771803 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 01:01:03.772026 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 01:01:03.784141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:01:03.784519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 01:01:03.797184 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:01:03.797593 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 01:01:03.809185 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:01:03.809589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 01:01:03.822201 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 01:01:03.822718 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 01:01:03.833196 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:01:03.833722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 01:01:03.844180 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:01:03.855185 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 01:01:03.867137 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 01:01:03.879145 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:01:03.899138 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 1 01:01:03.921419 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 01:01:03.933276 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 01:01:03.943408 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 01:01:03.943445 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 01:01:03.955006 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 1 01:01:03.979464 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 01:01:03.991061 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 01:01:04.001482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 01:01:04.002754 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 01:01:04.012874 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 01:01:04.024370 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:01:04.024976 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 01:01:04.028208 systemd-journald[1377]: Time spent on flushing to /var/log/journal/9a1528cbffc843869ff6ca016dec50a3 is 13.330ms for 1368 entries. Nov 1 01:01:04.028208 systemd-journald[1377]: System Journal (/var/log/journal/9a1528cbffc843869ff6ca016dec50a3) is 8.0M, max 195.6M, 187.6M free. Nov 1 01:01:04.066085 systemd-journald[1377]: Received client request to flush runtime journal. Nov 1 01:01:04.042388 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 1 01:01:04.043009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:01:04.051495 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 01:01:04.072049 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:01:04.096226 kernel: loop0: detected capacity change from 0 to 229808 Nov 1 01:01:04.108397 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 1 01:01:04.127313 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 01:01:04.133227 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 01:01:04.142359 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Nov 1 01:01:04.142369 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Nov 1 01:01:04.145429 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 01:01:04.156402 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 01:01:04.167504 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 01:01:04.182796 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 01:01:04.196294 kernel: loop1: detected capacity change from 0 to 142488 Nov 1 01:01:04.206469 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:01:04.216471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:01:04.230134 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 01:01:04.253535 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 1 01:01:04.265085 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Nov 1 01:01:04.284266 kernel: loop2: detected capacity change from 0 to 140768 Nov 1 01:01:04.293904 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 01:01:04.294396 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 1 01:01:04.305828 udevadm[1413]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 01:01:04.318226 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 01:01:04.338372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:01:04.346581 systemd-tmpfiles[1433]: ACLs are not supported, ignoring. Nov 1 01:01:04.346595 systemd-tmpfiles[1433]: ACLs are not supported, ignoring. Nov 1 01:01:04.349555 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:01:04.373290 kernel: loop3: detected capacity change from 0 to 8 Nov 1 01:01:04.413221 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 01:01:04.422330 kernel: loop4: detected capacity change from 0 to 229808 Nov 1 01:01:04.437771 ldconfig[1403]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 01:01:04.459489 kernel: loop5: detected capacity change from 0 to 142488 Nov 1 01:01:04.459437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:01:04.471424 systemd-udevd[1439]: Using default interface naming scheme 'v255'. Nov 1 01:01:04.471487 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 01:01:04.487409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:01:04.491227 kernel: loop6: detected capacity change from 0 to 140768 Nov 1 01:01:04.507028 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. 
Nov 1 01:01:04.509342 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 01:01:04.528231 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (1450) Nov 1 01:01:04.528285 kernel: loop7: detected capacity change from 0 to 8 Nov 1 01:01:04.545287 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Nov 1 01:01:04.545917 (sd-merge)[1437]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Nov 1 01:01:04.546288 (sd-merge)[1437]: Merged extensions into '/usr'. Nov 1 01:01:04.567228 kernel: ACPI: button: Sleep Button [SLPB] Nov 1 01:01:04.567291 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 01:01:04.573330 systemd[1]: Reloading requested from client PID 1408 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 01:01:04.573336 systemd[1]: Reloading... Nov 1 01:01:04.605251 kernel: IPMI message handler: version 39.2 Nov 1 01:01:04.605299 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 01:01:04.622231 zram_generator::config[1549]: No configuration found. Nov 1 01:01:04.622293 kernel: ACPI: button: Power Button [PWRF] Nov 1 01:01:04.674231 kernel: ipmi device interface Nov 1 01:01:04.713788 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Nov 1 01:01:04.713935 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Nov 1 01:01:04.743236 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Nov 1 01:01:04.745914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 01:01:04.783953 kernel: ipmi_si: IPMI System Interface driver Nov 1 01:01:04.783980 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Nov 1 01:01:04.800853 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Nov 1 01:01:04.802416 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Nov 1 01:01:04.819554 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Nov 1 01:01:04.819584 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Nov 1 01:01:04.836536 systemd[1]: Reloading finished in 262 ms. Nov 1 01:01:04.855841 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Nov 1 01:01:04.875862 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Nov 1 01:01:04.892726 kernel: ipmi_si: Adding ACPI-specified kcs state machine Nov 1 01:01:04.913544 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Nov 1 01:01:04.945223 kernel: iTCO_vendor_support: vendor-support=0 Nov 1 01:01:04.945271 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Nov 1 01:01:04.987233 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Nov 1 01:01:04.987455 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Nov 1 01:01:05.004230 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Nov 1 01:01:05.067491 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Nov 1 01:01:05.067620 kernel: iTCO_wdt iTCO_wdt: initialized. 
heartbeat=30 sec (nowayout=0) Nov 1 01:01:05.109990 kernel: intel_rapl_common: Found RAPL domain package Nov 1 01:01:05.110038 kernel: intel_rapl_common: Found RAPL domain core Nov 1 01:01:05.121226 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Nov 1 01:01:05.121327 kernel: intel_rapl_common: Found RAPL domain dram Nov 1 01:01:05.171227 kernel: ipmi_ssif: IPMI SSIF Interface driver Nov 1 01:01:05.189855 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 01:01:05.208691 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 1 01:01:05.233335 systemd[1]: Starting ensure-sysext.service... Nov 1 01:01:05.240835 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 1 01:01:05.251799 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 01:01:05.260283 lvm[1622]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:01:05.262951 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:01:05.275159 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 01:01:05.275760 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:01:05.277075 systemd[1]: Reloading requested from client PID 1621 ('systemctl') (unit ensure-sysext.service)... Nov 1 01:01:05.277083 systemd[1]: Reloading... Nov 1 01:01:05.283650 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 01:01:05.283874 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 01:01:05.284427 systemd-tmpfiles[1624]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Nov 1 01:01:05.284610 systemd-tmpfiles[1624]: ACLs are not supported, ignoring. Nov 1 01:01:05.284651 systemd-tmpfiles[1624]: ACLs are not supported, ignoring. Nov 1 01:01:05.286686 systemd-tmpfiles[1624]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 01:01:05.286691 systemd-tmpfiles[1624]: Skipping /boot Nov 1 01:01:05.291311 systemd-tmpfiles[1624]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 01:01:05.291315 systemd-tmpfiles[1624]: Skipping /boot Nov 1 01:01:05.318232 zram_generator::config[1660]: No configuration found. Nov 1 01:01:05.357397 systemd-networkd[1507]: lo: Link UP Nov 1 01:01:05.357401 systemd-networkd[1507]: lo: Gained carrier Nov 1 01:01:05.360392 systemd-networkd[1507]: bond0: netdev ready Nov 1 01:01:05.361303 systemd-networkd[1507]: Enumeration completed Nov 1 01:01:05.371355 systemd-networkd[1507]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:42:d7:f0.network. Nov 1 01:01:05.379123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:01:05.434228 systemd[1]: Reloading finished in 156 ms. Nov 1 01:01:05.446088 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 01:01:05.455377 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:01:05.477481 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 1 01:01:05.488476 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 01:01:05.499428 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:01:05.510472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:01:05.524664 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Nov 1 01:01:05.542403 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 01:01:05.553397 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 01:01:05.561028 augenrules[1746]: No rules Nov 1 01:01:05.565013 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 1 01:01:05.577046 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 01:01:05.579303 lvm[1751]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 01:01:05.599678 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 01:01:05.611340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 01:01:05.635447 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 01:01:05.646923 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 01:01:05.656518 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 01:01:05.667506 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 1 01:01:05.678526 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 01:01:05.704935 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 01:01:05.717209 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:05.717330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 01:01:05.729369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 01:01:05.732152 systemd-resolved[1754]: Positive Trust Anchors: Nov 1 01:01:05.732159 systemd-resolved[1754]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:01:05.732199 systemd-resolved[1754]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:01:05.735008 systemd-resolved[1754]: Using system hostname 'ci-4081.3.6-n-13ad226fb7'. Nov 1 01:01:05.739976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 01:01:05.751904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 01:01:05.761356 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 01:01:05.762071 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 01:01:05.771305 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:01:05.771366 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:05.771997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:01:05.772071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 01:01:05.783556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 01:01:05.783626 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Nov 1 01:01:05.794524 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:01:05.794595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 01:01:05.804509 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 01:01:05.816866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:05.816992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 01:01:05.826426 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 01:01:05.836926 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 01:01:05.846934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 01:01:05.858914 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 01:01:05.868353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 01:01:05.868430 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 01:01:05.868482 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 01:01:05.869071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 01:01:05.869144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 01:01:05.880546 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 01:01:05.880615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 01:01:05.890496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 1 01:01:05.890565 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 01:01:05.901479 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 01:01:05.901550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 01:01:05.912125 systemd[1]: Finished ensure-sysext.service. Nov 1 01:01:05.921662 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 01:01:05.921693 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 01:01:05.935423 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 01:01:05.970187 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 01:01:05.981292 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 01:01:06.364260 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Nov 1 01:01:06.387223 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Nov 1 01:01:06.388414 systemd-networkd[1507]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:42:d7:f1.network. Nov 1 01:01:06.389226 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:01:06.399411 systemd[1]: Reached target network.target - Network. Nov 1 01:01:06.407302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:01:06.418407 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 01:01:06.428387 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 01:01:06.439334 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 01:01:06.450420 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Nov 1 01:01:06.460337 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 01:01:06.471304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 01:01:06.482317 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 01:01:06.482331 systemd[1]: Reached target paths.target - Path Units. Nov 1 01:01:06.490481 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:01:06.498681 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 01:01:06.509271 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 01:01:06.519452 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 01:01:06.529851 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 01:01:06.539520 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:01:06.549414 systemd[1]: Reached target basic.target - Basic System. Nov 1 01:01:06.560716 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 01:01:06.560732 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 01:01:06.574396 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 01:01:06.585305 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 01:01:06.596272 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 01:01:06.609261 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Nov 1 01:01:06.619232 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Nov 1 01:01:06.619881 coreos-metadata[1789]: Nov 01 01:01:06.619 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:01:06.621735 coreos-metadata[1789]: Nov 01 01:01:06.621 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Nov 1 01:01:06.632608 dbus-daemon[1790]: [system] SELinux support is enabled Nov 1 01:01:06.636976 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 01:01:06.640427 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 01:01:06.640944 systemd-networkd[1507]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Nov 1 01:01:06.641223 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Nov 1 01:01:06.641371 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 01:01:06.642127 systemd-networkd[1507]: enp1s0f0np0: Link UP Nov 1 01:01:06.642329 systemd-networkd[1507]: enp1s0f0np0: Gained carrier Nov 1 01:01:06.643051 jq[1793]: false Nov 1 01:01:06.664017 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 01:01:06.664295 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 01:01:06.674394 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 01:01:06.675088 systemd-networkd[1507]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:42:d7:f0.network. 
Nov 1 01:01:06.675376 systemd-networkd[1507]: enp1s0f1np1: Link UP Nov 1 01:01:06.675611 systemd-networkd[1507]: enp1s0f1np1: Gained carrier Nov 1 01:01:06.681977 extend-filesystems[1795]: Found loop4 Nov 1 01:01:06.681977 extend-filesystems[1795]: Found loop5 Nov 1 01:01:06.730140 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Nov 1 01:01:06.730157 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (1504) Nov 1 01:01:06.730189 extend-filesystems[1795]: Found loop6 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found loop7 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sda Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb1 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb2 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb3 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found usr Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb4 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb6 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb7 Nov 1 01:01:06.730189 extend-filesystems[1795]: Found sdb9 Nov 1 01:01:06.730189 extend-filesystems[1795]: Checking size of /dev/sdb9 Nov 1 01:01:06.730189 extend-filesystems[1795]: Resized partition /dev/sdb9 Nov 1 01:01:06.917378 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Nov 1 01:01:06.917395 kernel: bond0: active interface up! Nov 1 01:01:06.917405 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Nov 1 01:01:06.685119 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 1 01:01:06.917495 extend-filesystems[1811]: resize2fs 1.47.1 (20-May-2024) Nov 1 01:01:06.688527 systemd-networkd[1507]: bond0: Link UP Nov 1 01:01:06.688790 systemd-networkd[1507]: bond0: Gained carrier Nov 1 01:01:06.688980 systemd-timesyncd[1785]: Network configuration changed, trying to establish connection. Nov 1 01:01:06.731644 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 01:01:06.937523 sshd_keygen[1819]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 01:01:06.798371 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Nov 1 01:01:06.937593 update_engine[1821]: I20251101 01:01:06.849776 1821 main.cc:92] Flatcar Update Engine starting Nov 1 01:01:06.937593 update_engine[1821]: I20251101 01:01:06.850433 1821 update_check_scheduler.cc:74] Next update check in 3m59s Nov 1 01:01:06.798671 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 01:01:06.937786 jq[1822]: true Nov 1 01:01:06.799053 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 01:01:06.823596 systemd-logind[1816]: Watching system buttons on /dev/input/event3 (Power Button) Nov 1 01:01:06.823606 systemd-logind[1816]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 01:01:06.823615 systemd-logind[1816]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 1 01:01:06.823771 systemd-logind[1816]: New seat seat0. Nov 1 01:01:06.841300 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 01:01:06.849618 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 01:01:06.885178 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 01:01:06.917438 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 1 01:01:06.917548 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 01:01:06.917711 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 01:01:06.917810 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 01:01:06.961407 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 01:01:06.961497 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 01:01:06.972468 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 01:01:06.985197 (ntainerd)[1834]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 01:01:06.986613 jq[1833]: true Nov 1 01:01:06.988984 tar[1831]: linux-amd64/LICENSE Nov 1 01:01:06.989197 tar[1831]: linux-amd64/helm Nov 1 01:01:06.989272 dbus-daemon[1790]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 01:01:06.992640 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 1 01:01:06.992740 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Nov 1 01:01:06.999505 systemd[1]: Started update-engine.service - Update Engine. Nov 1 01:01:07.010608 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 01:01:07.018357 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 01:01:07.018457 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 01:01:07.029386 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Nov 1 01:01:07.029468 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 01:01:07.038985 bash[1862]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:01:07.041099 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 01:01:07.060456 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 01:01:07.071574 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 01:01:07.071680 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 01:01:07.078694 locksmithd[1869]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 01:01:07.093401 systemd[1]: Starting sshkeys.service... Nov 1 01:01:07.101027 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 01:01:07.113255 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 01:01:07.144482 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 01:01:07.155432 coreos-metadata[1888]: Nov 01 01:01:07.155 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 01:01:07.155727 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 01:01:07.160705 containerd[1834]: time="2025-11-01T01:01:07.160663856Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 01:01:07.173724 containerd[1834]: time="2025-11-01T01:01:07.173671532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174414 containerd[1834]: time="2025-11-01T01:01:07.174391844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174439 containerd[1834]: time="2025-11-01T01:01:07.174413580Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 01:01:07.174439 containerd[1834]: time="2025-11-01T01:01:07.174424170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 01:01:07.174519 containerd[1834]: time="2025-11-01T01:01:07.174511240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 01:01:07.174537 containerd[1834]: time="2025-11-01T01:01:07.174523437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174580 containerd[1834]: time="2025-11-01T01:01:07.174570785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174597 containerd[1834]: time="2025-11-01T01:01:07.174580150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174682 containerd[1834]: time="2025-11-01T01:01:07.174671734Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174705 containerd[1834]: time="2025-11-01T01:01:07.174685972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174720 containerd[1834]: time="2025-11-01T01:01:07.174700810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174720 containerd[1834]: time="2025-11-01T01:01:07.174709055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174760 containerd[1834]: time="2025-11-01T01:01:07.174752684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174888 containerd[1834]: time="2025-11-01T01:01:07.174880543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174952 containerd[1834]: time="2025-11-01T01:01:07.174939403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 01:01:07.174971 containerd[1834]: time="2025-11-01T01:01:07.174953912Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 01:01:07.175007 containerd[1834]: time="2025-11-01T01:01:07.175000366Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 01:01:07.175035 containerd[1834]: time="2025-11-01T01:01:07.175028450Z" level=info msg="metadata content store policy set" policy=shared Nov 1 01:01:07.184429 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 01:01:07.186484 containerd[1834]: time="2025-11-01T01:01:07.186471068Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Nov 1 01:01:07.186511 containerd[1834]: time="2025-11-01T01:01:07.186495530Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 01:01:07.186511 containerd[1834]: time="2025-11-01T01:01:07.186505135Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 01:01:07.186543 containerd[1834]: time="2025-11-01T01:01:07.186516213Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 01:01:07.186543 containerd[1834]: time="2025-11-01T01:01:07.186524528Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 01:01:07.186606 containerd[1834]: time="2025-11-01T01:01:07.186595669Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 01:01:07.186745 containerd[1834]: time="2025-11-01T01:01:07.186729793Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 01:01:07.186834 containerd[1834]: time="2025-11-01T01:01:07.186822347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 01:01:07.186864 containerd[1834]: time="2025-11-01T01:01:07.186836061Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 01:01:07.186864 containerd[1834]: time="2025-11-01T01:01:07.186856151Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 01:01:07.186914 containerd[1834]: time="2025-11-01T01:01:07.186870439Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 1 01:01:07.186914 containerd[1834]: time="2025-11-01T01:01:07.186883895Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 01:01:07.186914 containerd[1834]: time="2025-11-01T01:01:07.186903020Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 01:01:07.186991 containerd[1834]: time="2025-11-01T01:01:07.186916858Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 01:01:07.186991 containerd[1834]: time="2025-11-01T01:01:07.186930617Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 01:01:07.186991 containerd[1834]: time="2025-11-01T01:01:07.186951509Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 01:01:07.186991 containerd[1834]: time="2025-11-01T01:01:07.186965292Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 01:01:07.186991 containerd[1834]: time="2025-11-01T01:01:07.186976541Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 01:01:07.187116 containerd[1834]: time="2025-11-01T01:01:07.187002584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187116 containerd[1834]: time="2025-11-01T01:01:07.187020559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187116 containerd[1834]: time="2025-11-01T01:01:07.187033652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Nov 1 01:01:07.187116 containerd[1834]: time="2025-11-01T01:01:07.187053514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187116 containerd[1834]: time="2025-11-01T01:01:07.187066007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187116 containerd[1834]: time="2025-11-01T01:01:07.187081959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187116 containerd[1834]: time="2025-11-01T01:01:07.187102823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187116159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187128816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187143658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187154963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187175002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187187645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187221821Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187241946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187265144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187305 containerd[1834]: time="2025-11-01T01:01:07.187277199Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187320222Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187335733Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187347973Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187367144Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187378227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187389775Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187400237Z" level=info msg="NRI interface is disabled by configuration." 
Nov 1 01:01:07.187545 containerd[1834]: time="2025-11-01T01:01:07.187416830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 01:01:07.187757 containerd[1834]: time="2025-11-01T01:01:07.187711769Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 01:01:07.187869 containerd[1834]: time="2025-11-01T01:01:07.187762300Z" level=info msg="Connect containerd service" Nov 1 01:01:07.187869 containerd[1834]: time="2025-11-01T01:01:07.187788948Z" level=info msg="using legacy CRI server" Nov 1 01:01:07.187869 containerd[1834]: time="2025-11-01T01:01:07.187796212Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 01:01:07.187950 containerd[1834]: time="2025-11-01T01:01:07.187870282Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 01:01:07.188251 containerd[1834]: time="2025-11-01T01:01:07.188238146Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:01:07.188352 containerd[1834]: time="2025-11-01T01:01:07.188331291Z" level=info msg="Start subscribing containerd event" Nov 1 01:01:07.188382 containerd[1834]: time="2025-11-01T01:01:07.188359715Z" level=info msg="Start recovering state" Nov 1 01:01:07.188689 containerd[1834]: time="2025-11-01T01:01:07.188672377Z" 
level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 01:01:07.188730 containerd[1834]: time="2025-11-01T01:01:07.188674152Z" level=info msg="Start event monitor" Nov 1 01:01:07.188755 containerd[1834]: time="2025-11-01T01:01:07.188716262Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 01:01:07.188784 containerd[1834]: time="2025-11-01T01:01:07.188766926Z" level=info msg="Start snapshots syncer" Nov 1 01:01:07.188799 containerd[1834]: time="2025-11-01T01:01:07.188789113Z" level=info msg="Start cni network conf syncer for default" Nov 1 01:01:07.188823 containerd[1834]: time="2025-11-01T01:01:07.188797078Z" level=info msg="Start streaming server" Nov 1 01:01:07.188851 containerd[1834]: time="2025-11-01T01:01:07.188841965Z" level=info msg="containerd successfully booted in 0.028600s" Nov 1 01:01:07.193108 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Nov 1 01:01:07.202474 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 01:01:07.210663 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 01:01:07.263226 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Nov 1 01:01:07.287018 extend-filesystems[1811]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Nov 1 01:01:07.287018 extend-filesystems[1811]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 1 01:01:07.287018 extend-filesystems[1811]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Nov 1 01:01:07.316361 extend-filesystems[1795]: Resized filesystem in /dev/sdb9 Nov 1 01:01:07.334325 tar[1831]: linux-amd64/README.md Nov 1 01:01:07.287533 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 01:01:07.287633 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 01:01:07.336425 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 1 01:01:07.621810 coreos-metadata[1789]: Nov 01 01:01:07.621 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 01:01:08.161594 systemd-networkd[1507]: bond0: Gained IPv6LL Nov 1 01:01:08.482827 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 01:01:08.494722 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 01:01:08.520394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:01:08.531020 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 01:01:08.557897 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 01:01:09.292026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:01:09.312444 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:01:09.762975 kubelet[1931]: E1101 01:01:09.762918 1931 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:01:09.764044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:01:09.764124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:01:09.969297 systemd-timesyncd[1785]: Contacted time server 45.79.51.42:123 (0.flatcar.pool.ntp.org). Nov 1 01:01:09.969319 systemd-timesyncd[1785]: Initial clock synchronization to Sat 2025-11-01 01:01:10.212629 UTC. Nov 1 01:01:10.212010 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 01:01:10.232502 systemd[1]: Started sshd@0-145.40.82.59:22-139.178.89.65:55028.service - OpenSSH per-connection server daemon (139.178.89.65:55028). 
Nov 1 01:01:10.286147 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 55028 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:01:10.286353 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Nov 1 01:01:10.286459 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Nov 1 01:01:10.287201 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:01:10.293427 systemd-logind[1816]: New session 1 of user core. Nov 1 01:01:10.294412 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 01:01:10.326588 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 01:01:10.343514 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 01:01:10.367590 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 01:01:10.378759 (systemd)[1952]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 01:01:10.458947 systemd[1952]: Queued start job for default target default.target. Nov 1 01:01:10.467831 systemd[1952]: Created slice app.slice - User Application Slice. Nov 1 01:01:10.467847 systemd[1952]: Reached target paths.target - Paths. Nov 1 01:01:10.467855 systemd[1952]: Reached target timers.target - Timers. Nov 1 01:01:10.468530 systemd[1952]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 01:01:10.474263 systemd[1952]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 01:01:10.474331 systemd[1952]: Reached target sockets.target - Sockets. Nov 1 01:01:10.474342 systemd[1952]: Reached target basic.target - Basic System. Nov 1 01:01:10.474363 systemd[1952]: Reached target default.target - Main User Target. Nov 1 01:01:10.474379 systemd[1952]: Startup finished in 90ms. Nov 1 01:01:10.474424 systemd[1]: Started user@500.service - User Manager for UID 500. 
Nov 1 01:01:10.485170 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 01:01:10.550404 systemd[1]: Started sshd@1-145.40.82.59:22-139.178.89.65:55034.service - OpenSSH per-connection server daemon (139.178.89.65:55034). Nov 1 01:01:10.587003 sshd[1965]: Accepted publickey for core from 139.178.89.65 port 55034 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:01:10.587735 sshd[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:01:10.589939 systemd-logind[1816]: New session 2 of user core. Nov 1 01:01:10.600387 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 01:01:10.658010 sshd[1965]: pam_unix(sshd:session): session closed for user core Nov 1 01:01:10.668788 systemd[1]: sshd@1-145.40.82.59:22-139.178.89.65:55034.service: Deactivated successfully. Nov 1 01:01:10.669531 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 01:01:10.670147 systemd-logind[1816]: Session 2 logged out. Waiting for processes to exit. Nov 1 01:01:10.670824 systemd[1]: Started sshd@2-145.40.82.59:22-139.178.89.65:55040.service - OpenSSH per-connection server daemon (139.178.89.65:55040). Nov 1 01:01:10.683143 systemd-logind[1816]: Removed session 2. Nov 1 01:01:10.707647 sshd[1972]: Accepted publickey for core from 139.178.89.65 port 55040 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:01:10.708309 sshd[1972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:01:10.710759 systemd-logind[1816]: New session 3 of user core. Nov 1 01:01:10.721399 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 01:01:10.779567 sshd[1972]: pam_unix(sshd:session): session closed for user core Nov 1 01:01:10.780824 systemd[1]: sshd@2-145.40.82.59:22-139.178.89.65:55040.service: Deactivated successfully. Nov 1 01:01:10.781673 systemd[1]: session-3.scope: Deactivated successfully. 
Nov 1 01:01:10.782236 systemd-logind[1816]: Session 3 logged out. Waiting for processes to exit. Nov 1 01:01:10.782847 systemd-logind[1816]: Removed session 3. Nov 1 01:01:11.156927 coreos-metadata[1789]: Nov 01 01:01:11.156 INFO Fetch successful Nov 1 01:01:11.216960 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 01:01:11.227523 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Nov 1 01:01:12.157110 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Nov 1 01:01:12.230974 login[1901]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:01:12.231857 login[1906]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 01:01:12.233791 systemd-logind[1816]: New session 5 of user core. Nov 1 01:01:12.248410 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 01:01:12.249841 systemd-logind[1816]: New session 4 of user core. Nov 1 01:01:12.250672 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 01:01:13.820856 coreos-metadata[1888]: Nov 01 01:01:13.820 INFO Fetch successful Nov 1 01:01:13.905600 unknown[1888]: wrote ssh authorized keys file for user: core Nov 1 01:01:13.939968 update-ssh-keys[2011]: Updated "/home/core/.ssh/authorized_keys" Nov 1 01:01:13.940341 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 01:01:13.941421 systemd[1]: Finished sshkeys.service. Nov 1 01:01:13.942562 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 01:01:13.942789 systemd[1]: Startup finished in 1.758s (kernel) + 32.993s (initrd) + 13.089s (userspace) = 47.841s. Nov 1 01:01:19.950225 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 01:01:19.963601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 01:01:20.210123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:01:20.212326 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 01:01:20.240188 kubelet[2026]: E1101 01:01:20.240123 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 01:01:20.242212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 01:01:20.242338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 01:01:20.981118 systemd[1]: Started sshd@3-145.40.82.59:22-139.178.89.65:48792.service - OpenSSH per-connection server daemon (139.178.89.65:48792). Nov 1 01:01:21.010170 sshd[2044]: Accepted publickey for core from 139.178.89.65 port 48792 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:01:21.010901 sshd[2044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:01:21.013724 systemd-logind[1816]: New session 6 of user core. Nov 1 01:01:21.026502 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 01:01:21.080113 sshd[2044]: pam_unix(sshd:session): session closed for user core Nov 1 01:01:21.093903 systemd[1]: sshd@3-145.40.82.59:22-139.178.89.65:48792.service: Deactivated successfully. Nov 1 01:01:21.094666 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 01:01:21.095468 systemd-logind[1816]: Session 6 logged out. Waiting for processes to exit. Nov 1 01:01:21.096131 systemd[1]: Started sshd@4-145.40.82.59:22-139.178.89.65:48808.service - OpenSSH per-connection server daemon (139.178.89.65:48808). 
Nov 1 01:01:21.096784 systemd-logind[1816]: Removed session 6.
Nov 1 01:01:21.127533 sshd[2051]: Accepted publickey for core from 139.178.89.65 port 48808 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:01:21.128819 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:01:21.133789 systemd-logind[1816]: New session 7 of user core.
Nov 1 01:01:21.151752 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 01:01:21.210793 sshd[2051]: pam_unix(sshd:session): session closed for user core
Nov 1 01:01:21.219859 systemd[1]: sshd@4-145.40.82.59:22-139.178.89.65:48808.service: Deactivated successfully.
Nov 1 01:01:21.220621 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 01:01:21.221328 systemd-logind[1816]: Session 7 logged out. Waiting for processes to exit.
Nov 1 01:01:21.221935 systemd[1]: Started sshd@5-145.40.82.59:22-139.178.89.65:48820.service - OpenSSH per-connection server daemon (139.178.89.65:48820).
Nov 1 01:01:21.222475 systemd-logind[1816]: Removed session 7.
Nov 1 01:01:21.253246 sshd[2058]: Accepted publickey for core from 139.178.89.65 port 48820 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:01:21.254698 sshd[2058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:01:21.260021 systemd-logind[1816]: New session 8 of user core.
Nov 1 01:01:21.273673 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 1 01:01:21.344802 sshd[2058]: pam_unix(sshd:session): session closed for user core
Nov 1 01:01:21.361065 systemd[1]: sshd@5-145.40.82.59:22-139.178.89.65:48820.service: Deactivated successfully.
Nov 1 01:01:21.364752 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 01:01:21.368305 systemd-logind[1816]: Session 8 logged out. Waiting for processes to exit.
Nov 1 01:01:21.381965 systemd[1]: Started sshd@6-145.40.82.59:22-139.178.89.65:48834.service - OpenSSH per-connection server daemon (139.178.89.65:48834).
Nov 1 01:01:21.384305 systemd-logind[1816]: Removed session 8.
Nov 1 01:01:21.431922 sshd[2065]: Accepted publickey for core from 139.178.89.65 port 48834 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:01:21.433399 sshd[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:01:21.438200 systemd-logind[1816]: New session 9 of user core.
Nov 1 01:01:21.453609 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 1 01:01:21.522878 sudo[2069]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 01:01:21.523031 sudo[2069]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:01:21.546072 sudo[2069]: pam_unix(sudo:session): session closed for user root
Nov 1 01:01:21.547056 sshd[2065]: pam_unix(sshd:session): session closed for user core
Nov 1 01:01:21.556322 systemd[1]: sshd@6-145.40.82.59:22-139.178.89.65:48834.service: Deactivated successfully.
Nov 1 01:01:21.557374 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 01:01:21.558362 systemd-logind[1816]: Session 9 logged out. Waiting for processes to exit.
Nov 1 01:01:21.559346 systemd[1]: Started sshd@7-145.40.82.59:22-139.178.89.65:48844.service - OpenSSH per-connection server daemon (139.178.89.65:48844).
Nov 1 01:01:21.560141 systemd-logind[1816]: Removed session 9.
Nov 1 01:01:21.597566 sshd[2074]: Accepted publickey for core from 139.178.89.65 port 48844 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:01:21.598305 sshd[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:01:21.601039 systemd-logind[1816]: New session 10 of user core.
Nov 1 01:01:21.619546 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 1 01:01:21.673592 sudo[2078]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 01:01:21.673744 sudo[2078]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:01:21.675879 sudo[2078]: pam_unix(sudo:session): session closed for user root
Nov 1 01:01:21.678605 sudo[2077]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 01:01:21.678763 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:01:21.692591 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 1 01:01:21.693705 auditctl[2081]: No rules
Nov 1 01:01:21.693921 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 01:01:21.694037 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 1 01:01:21.695606 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 01:01:21.712143 augenrules[2099]: No rules
Nov 1 01:01:21.712867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 01:01:21.713489 sudo[2077]: pam_unix(sudo:session): session closed for user root
Nov 1 01:01:21.714390 sshd[2074]: pam_unix(sshd:session): session closed for user core
Nov 1 01:01:21.716485 systemd[1]: sshd@7-145.40.82.59:22-139.178.89.65:48844.service: Deactivated successfully.
Nov 1 01:01:21.717272 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 01:01:21.717746 systemd-logind[1816]: Session 10 logged out. Waiting for processes to exit.
Nov 1 01:01:21.718687 systemd[1]: Started sshd@8-145.40.82.59:22-139.178.89.65:48850.service - OpenSSH per-connection server daemon (139.178.89.65:48850).
Nov 1 01:01:21.719131 systemd-logind[1816]: Removed session 10.
Nov 1 01:01:21.750891 sshd[2107]: Accepted publickey for core from 139.178.89.65 port 48850 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:01:21.752324 sshd[2107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:01:21.757784 systemd-logind[1816]: New session 11 of user core.
Nov 1 01:01:21.766636 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 1 01:01:21.825604 sudo[2110]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 01:01:21.825763 sudo[2110]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:01:22.110479 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 01:01:22.110531 (dockerd)[2134]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 01:01:22.406301 dockerd[2134]: time="2025-11-01T01:01:22.406198604Z" level=info msg="Starting up"
Nov 1 01:01:22.475339 dockerd[2134]: time="2025-11-01T01:01:22.475237167Z" level=info msg="Loading containers: start."
Nov 1 01:01:22.572260 kernel: Initializing XFRM netlink socket
Nov 1 01:01:22.658414 systemd-networkd[1507]: docker0: Link UP
Nov 1 01:01:22.677282 dockerd[2134]: time="2025-11-01T01:01:22.677206281Z" level=info msg="Loading containers: done."
Nov 1 01:01:22.695830 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2521315664-merged.mount: Deactivated successfully.
Nov 1 01:01:22.701122 dockerd[2134]: time="2025-11-01T01:01:22.701075971Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 01:01:22.701189 dockerd[2134]: time="2025-11-01T01:01:22.701125155Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 01:01:22.701278 dockerd[2134]: time="2025-11-01T01:01:22.701244830Z" level=info msg="Daemon has completed initialization"
Nov 1 01:01:22.716103 dockerd[2134]: time="2025-11-01T01:01:22.716074895Z" level=info msg="API listen on /run/docker.sock"
Nov 1 01:01:22.716197 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 01:01:23.539924 containerd[1834]: time="2025-11-01T01:01:23.539880840Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 1 01:01:24.126158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995282445.mount: Deactivated successfully.
Nov 1 01:01:24.941724 containerd[1834]: time="2025-11-01T01:01:24.941674458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:24.941937 containerd[1834]: time="2025-11-01T01:01:24.941860746Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893"
Nov 1 01:01:24.942309 containerd[1834]: time="2025-11-01T01:01:24.942272968Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:24.943881 containerd[1834]: time="2025-11-01T01:01:24.943844087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:24.944534 containerd[1834]: time="2025-11-01T01:01:24.944494749Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.404589505s"
Nov 1 01:01:24.944534 containerd[1834]: time="2025-11-01T01:01:24.944515612Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 1 01:01:24.944900 containerd[1834]: time="2025-11-01T01:01:24.944863084Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 1 01:01:26.027929 containerd[1834]: time="2025-11-01T01:01:26.027897918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:26.028202 containerd[1834]: time="2025-11-01T01:01:26.028092030Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844"
Nov 1 01:01:26.028637 containerd[1834]: time="2025-11-01T01:01:26.028624424Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:26.030192 containerd[1834]: time="2025-11-01T01:01:26.030179204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:26.030875 containerd[1834]: time="2025-11-01T01:01:26.030860463Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.085981985s"
Nov 1 01:01:26.030922 containerd[1834]: time="2025-11-01T01:01:26.030878428Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 1 01:01:26.031146 containerd[1834]: time="2025-11-01T01:01:26.031136315Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 1 01:01:27.100714 containerd[1834]: time="2025-11-01T01:01:27.100687268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:27.100945 containerd[1834]: time="2025-11-01T01:01:27.100929420Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568"
Nov 1 01:01:27.101298 containerd[1834]: time="2025-11-01T01:01:27.101286845Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:27.102865 containerd[1834]: time="2025-11-01T01:01:27.102851868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:27.103492 containerd[1834]: time="2025-11-01T01:01:27.103478474Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.072327053s"
Nov 1 01:01:27.103516 containerd[1834]: time="2025-11-01T01:01:27.103496193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 1 01:01:27.103769 containerd[1834]: time="2025-11-01T01:01:27.103759103Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 1 01:01:27.995638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432197228.mount: Deactivated successfully.
Nov 1 01:01:28.192076 containerd[1834]: time="2025-11-01T01:01:28.192046701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:28.192317 containerd[1834]: time="2025-11-01T01:01:28.192240463Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469"
Nov 1 01:01:28.192800 containerd[1834]: time="2025-11-01T01:01:28.192779118Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:28.194193 containerd[1834]: time="2025-11-01T01:01:28.194179063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:28.194583 containerd[1834]: time="2025-11-01T01:01:28.194568665Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.090793896s"
Nov 1 01:01:28.194625 containerd[1834]: time="2025-11-01T01:01:28.194586396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 1 01:01:28.194860 containerd[1834]: time="2025-11-01T01:01:28.194849572Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 1 01:01:28.780787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3659863039.mount: Deactivated successfully.
Nov 1 01:01:29.407738 containerd[1834]: time="2025-11-01T01:01:29.407711328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:29.407956 containerd[1834]: time="2025-11-01T01:01:29.407885288Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Nov 1 01:01:29.408340 containerd[1834]: time="2025-11-01T01:01:29.408327513Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:29.410104 containerd[1834]: time="2025-11-01T01:01:29.410090627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:29.410793 containerd[1834]: time="2025-11-01T01:01:29.410779241Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.215913772s"
Nov 1 01:01:29.410827 containerd[1834]: time="2025-11-01T01:01:29.410795363Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 1 01:01:29.411061 containerd[1834]: time="2025-11-01T01:01:29.411048727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 01:01:29.896458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811164137.mount: Deactivated successfully.
Nov 1 01:01:29.897685 containerd[1834]: time="2025-11-01T01:01:29.897670059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:29.897865 containerd[1834]: time="2025-11-01T01:01:29.897844319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Nov 1 01:01:29.898266 containerd[1834]: time="2025-11-01T01:01:29.898254423Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:29.899368 containerd[1834]: time="2025-11-01T01:01:29.899354895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:29.899777 containerd[1834]: time="2025-11-01T01:01:29.899765173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 488.70068ms"
Nov 1 01:01:29.899807 containerd[1834]: time="2025-11-01T01:01:29.899780917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 1 01:01:29.900121 containerd[1834]: time="2025-11-01T01:01:29.900108757Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 1 01:01:30.432389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 01:01:30.444450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:01:30.703691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097765996.mount: Deactivated successfully.
Nov 1 01:01:30.705092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:01:30.707855 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:01:30.732124 kubelet[2441]: E1101 01:01:30.732101 2441 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:01:30.733668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:01:30.733809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:01:31.819911 containerd[1834]: time="2025-11-01T01:01:31.819858544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:31.820125 containerd[1834]: time="2025-11-01T01:01:31.820013103Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433"
Nov 1 01:01:31.820568 containerd[1834]: time="2025-11-01T01:01:31.820528624Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:31.822698 containerd[1834]: time="2025-11-01T01:01:31.822659056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:31.823332 containerd[1834]: time="2025-11-01T01:01:31.823277118Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.923148296s"
Nov 1 01:01:31.823332 containerd[1834]: time="2025-11-01T01:01:31.823299025Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 1 01:01:34.226941 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:01:34.237581 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:01:34.249776 systemd[1]: Reloading requested from client PID 2562 ('systemctl') (unit session-11.scope)...
Nov 1 01:01:34.249783 systemd[1]: Reloading...
Nov 1 01:01:34.299302 zram_generator::config[2601]: No configuration found.
Nov 1 01:01:34.366283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:01:34.427725 systemd[1]: Reloading finished in 177 ms.
Nov 1 01:01:34.470233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 1 01:01:34.470291 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 1 01:01:34.470427 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:01:34.486623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:01:34.754348 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:01:34.760126 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 01:01:34.779433 kubelet[2665]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:01:34.779433 kubelet[2665]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 01:01:34.779433 kubelet[2665]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:01:34.779643 kubelet[2665]: I1101 01:01:34.779460 2665 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 01:01:35.086633 kubelet[2665]: I1101 01:01:35.086568 2665 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 1 01:01:35.086633 kubelet[2665]: I1101 01:01:35.086579 2665 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 01:01:35.086696 kubelet[2665]: I1101 01:01:35.086685 2665 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 1 01:01:35.109276 kubelet[2665]: I1101 01:01:35.109267 2665 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 01:01:35.109846 kubelet[2665]: E1101 01:01:35.109833 2665 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://145.40.82.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 1 01:01:35.114824 kubelet[2665]: E1101 01:01:35.114790 2665 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 01:01:35.114824 kubelet[2665]: I1101 01:01:35.114821 2665 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 01:01:35.123512 kubelet[2665]: I1101 01:01:35.123468 2665 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 01:01:35.123609 kubelet[2665]: I1101 01:01:35.123569 2665 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 01:01:35.123693 kubelet[2665]: I1101 01:01:35.123580 2665 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-13ad226fb7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 01:01:35.123693 kubelet[2665]: I1101 01:01:35.123668 2665 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 01:01:35.123693 kubelet[2665]: I1101 01:01:35.123674 2665 container_manager_linux.go:303] "Creating device plugin manager"
Nov 1 01:01:35.124697 kubelet[2665]: I1101 01:01:35.124662 2665 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:01:35.127922 kubelet[2665]: I1101 01:01:35.127886 2665 kubelet.go:480] "Attempting to sync node with API server"
Nov 1 01:01:35.127922 kubelet[2665]: I1101 01:01:35.127899 2665 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 01:01:35.127922 kubelet[2665]: I1101 01:01:35.127914 2665 kubelet.go:386] "Adding apiserver pod source"
Nov 1 01:01:35.127922 kubelet[2665]: I1101 01:01:35.127922 2665 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 01:01:35.132127 kubelet[2665]: I1101 01:01:35.132116 2665 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 01:01:35.132468 kubelet[2665]: I1101 01:01:35.132435 2665 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 1 01:01:35.132910 kubelet[2665]: W1101 01:01:35.132902 2665 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 01:01:35.134199 kubelet[2665]: I1101 01:01:35.134174 2665 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 01:01:35.134262 kubelet[2665]: I1101 01:01:35.134234 2665 server.go:1289] "Started kubelet"
Nov 1 01:01:35.134361 kubelet[2665]: I1101 01:01:35.134314 2665 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 01:01:35.134603 kubelet[2665]: E1101 01:01:35.134576 2665 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://145.40.82.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-13ad226fb7&limit=500&resourceVersion=0\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 1 01:01:35.136235 kubelet[2665]: E1101 01:01:35.134507 2665 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://145.40.82.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 1 01:01:35.136305 kubelet[2665]: I1101 01:01:35.136234 2665 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 01:01:35.136305 kubelet[2665]: I1101 01:01:35.136240 2665 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 01:01:35.137253 kubelet[2665]: I1101 01:01:35.137243 2665 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 01:01:35.139231 kubelet[2665]: E1101 01:01:35.139194 2665 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-13ad226fb7\" not found"
Nov 1 01:01:35.139231 kubelet[2665]: I1101 01:01:35.139231 2665 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 01:01:35.139340 kubelet[2665]: I1101 01:01:35.139241 2665 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 01:01:35.139340 kubelet[2665]: I1101 01:01:35.139304 2665 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 01:01:35.139340 kubelet[2665]: I1101 01:01:35.139334 2665 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 01:01:35.139598 kubelet[2665]: E1101 01:01:35.139582 2665 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://145.40.82.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 1 01:01:35.139639 kubelet[2665]: E1101 01:01:35.139590 2665 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.82.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-13ad226fb7?timeout=10s\": dial tcp 145.40.82.59:6443: connect: connection refused" interval="200ms"
Nov 1 01:01:35.139739 kubelet[2665]: I1101 01:01:35.139729 2665 server.go:317] "Adding debug handlers to kubelet server"
Nov 1 01:01:35.139768 kubelet[2665]: I1101 01:01:35.139755 2665 factory.go:223] Registration of the systemd container factory successfully
Nov 1 01:01:35.139814 kubelet[2665]: I1101 01:01:35.139801 2665 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 01:01:35.139907 kubelet[2665]: E1101 01:01:35.139896 2665 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 01:01:35.140249 kubelet[2665]: I1101 01:01:35.140242 2665 factory.go:223] Registration of the containerd container factory successfully
Nov 1 01:01:35.141356 kubelet[2665]: E1101 01:01:35.140432 2665 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://145.40.82.59:6443/api/v1/namespaces/default/events\": dial tcp 145.40.82.59:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-13ad226fb7.1873bc47a5504631 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-13ad226fb7,UID:ci-4081.3.6-n-13ad226fb7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-13ad226fb7,},FirstTimestamp:2025-11-01 01:01:35.134197297 +0000 UTC m=+0.371790211,LastTimestamp:2025-11-01 01:01:35.134197297 +0000 UTC m=+0.371790211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-13ad226fb7,}"
Nov 1 01:01:35.148866 kubelet[2665]: I1101 01:01:35.148844 2665 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 1 01:01:35.149401 kubelet[2665]: I1101 01:01:35.149391 2665 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 01:01:35.149401 kubelet[2665]: I1101 01:01:35.149399 2665 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 01:01:35.149449 kubelet[2665]: I1101 01:01:35.149408 2665 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:01:35.149449 kubelet[2665]: I1101 01:01:35.149444 2665 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 1 01:01:35.149487 kubelet[2665]: I1101 01:01:35.149455 2665 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 1 01:01:35.149487 kubelet[2665]: I1101 01:01:35.149468 2665 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 01:01:35.149487 kubelet[2665]: I1101 01:01:35.149476 2665 kubelet.go:2436] "Starting kubelet main sync loop" Nov 1 01:01:35.149538 kubelet[2665]: E1101 01:01:35.149499 2665 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:01:35.149706 kubelet[2665]: E1101 01:01:35.149688 2665 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://145.40.82.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 01:01:35.150380 kubelet[2665]: I1101 01:01:35.150372 2665 policy_none.go:49] "None policy: Start" Nov 1 01:01:35.150406 kubelet[2665]: I1101 01:01:35.150383 2665 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:01:35.150406 kubelet[2665]: I1101 01:01:35.150393 2665 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:01:35.152683 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 01:01:35.169997 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 01:01:35.171893 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 01:01:35.183920 kubelet[2665]: E1101 01:01:35.183883 2665 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 01:01:35.184078 kubelet[2665]: I1101 01:01:35.184038 2665 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:01:35.184078 kubelet[2665]: I1101 01:01:35.184049 2665 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:01:35.184213 kubelet[2665]: I1101 01:01:35.184201 2665 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:01:35.184624 kubelet[2665]: E1101 01:01:35.184573 2665 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:01:35.184624 kubelet[2665]: E1101 01:01:35.184616 2665 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-13ad226fb7\" not found" Nov 1 01:01:35.261175 systemd[1]: Created slice kubepods-burstable-podb6ddb596a661251d1023235a1d735534.slice - libcontainer container kubepods-burstable-podb6ddb596a661251d1023235a1d735534.slice. 
Nov 1 01:01:35.288459 kubelet[2665]: I1101 01:01:35.288371 2665 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.289132 kubelet[2665]: E1101 01:01:35.289024 2665 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.82.59:6443/api/v1/nodes\": dial tcp 145.40.82.59:6443: connect: connection refused" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.292003 kubelet[2665]: E1101 01:01:35.291926 2665 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-13ad226fb7\" not found" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.300818 systemd[1]: Created slice kubepods-burstable-pod258c3aaefb23e2561e9fe0da0a69970a.slice - libcontainer container kubepods-burstable-pod258c3aaefb23e2561e9fe0da0a69970a.slice. Nov 1 01:01:35.305145 kubelet[2665]: E1101 01:01:35.305053 2665 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-13ad226fb7\" not found" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.309763 systemd[1]: Created slice kubepods-burstable-pod7e9c42a70baee4d1044b2fa8b276af66.slice - libcontainer container kubepods-burstable-pod7e9c42a70baee4d1044b2fa8b276af66.slice. 
Nov 1 01:01:35.313680 kubelet[2665]: E1101 01:01:35.313605 2665 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-13ad226fb7\" not found" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.340549 kubelet[2665]: I1101 01:01:35.340319 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.340549 kubelet[2665]: I1101 01:01:35.340414 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e9c42a70baee4d1044b2fa8b276af66-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-13ad226fb7\" (UID: \"7e9c42a70baee4d1044b2fa8b276af66\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.340549 kubelet[2665]: I1101 01:01:35.340472 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.340549 kubelet[2665]: I1101 01:01:35.340524 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6ddb596a661251d1023235a1d735534-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" (UID: \"b6ddb596a661251d1023235a1d735534\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.341040 kubelet[2665]: E1101 
01:01:35.340553 2665 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.82.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-13ad226fb7?timeout=10s\": dial tcp 145.40.82.59:6443: connect: connection refused" interval="400ms" Nov 1 01:01:35.341040 kubelet[2665]: I1101 01:01:35.340575 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6ddb596a661251d1023235a1d735534-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" (UID: \"b6ddb596a661251d1023235a1d735534\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.341040 kubelet[2665]: I1101 01:01:35.340699 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6ddb596a661251d1023235a1d735534-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" (UID: \"b6ddb596a661251d1023235a1d735534\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.341040 kubelet[2665]: I1101 01:01:35.340766 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.341040 kubelet[2665]: I1101 01:01:35.340918 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.341536 kubelet[2665]: I1101 01:01:35.341070 2665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.494006 kubelet[2665]: I1101 01:01:35.493903 2665 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.494744 kubelet[2665]: E1101 01:01:35.494630 2665 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.82.59:6443/api/v1/nodes\": dial tcp 145.40.82.59:6443: connect: connection refused" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.595301 containerd[1834]: time="2025-11-01T01:01:35.595015997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-13ad226fb7,Uid:b6ddb596a661251d1023235a1d735534,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:35.606556 containerd[1834]: time="2025-11-01T01:01:35.606514513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-13ad226fb7,Uid:258c3aaefb23e2561e9fe0da0a69970a,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:35.614666 containerd[1834]: time="2025-11-01T01:01:35.614637691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-13ad226fb7,Uid:7e9c42a70baee4d1044b2fa8b276af66,Namespace:kube-system,Attempt:0,}" Nov 1 01:01:35.741900 kubelet[2665]: E1101 01:01:35.741839 2665 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://145.40.82.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-13ad226fb7?timeout=10s\": dial tcp 145.40.82.59:6443: connect: connection 
refused" interval="800ms" Nov 1 01:01:35.896974 kubelet[2665]: I1101 01:01:35.896875 2665 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:35.897247 kubelet[2665]: E1101 01:01:35.897120 2665 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://145.40.82.59:6443/api/v1/nodes\": dial tcp 145.40.82.59:6443: connect: connection refused" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:36.144504 kubelet[2665]: E1101 01:01:36.144423 2665 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://145.40.82.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 01:01:36.169574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850694195.mount: Deactivated successfully. Nov 1 01:01:36.171612 containerd[1834]: time="2025-11-01T01:01:36.171595019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:01:36.171882 containerd[1834]: time="2025-11-01T01:01:36.171863963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 1 01:01:36.172183 containerd[1834]: time="2025-11-01T01:01:36.172171645Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:01:36.172637 containerd[1834]: time="2025-11-01T01:01:36.172625078Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Nov 1 01:01:36.172982 containerd[1834]: time="2025-11-01T01:01:36.172966244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:01:36.173148 containerd[1834]: time="2025-11-01T01:01:36.173127392Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:01:36.173517 containerd[1834]: time="2025-11-01T01:01:36.173470532Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:01:36.173656 kubelet[2665]: E1101 01:01:36.173624 2665 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://145.40.82.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 01:01:36.175457 containerd[1834]: time="2025-11-01T01:01:36.175416622Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 580.230402ms" Nov 1 01:01:36.176345 containerd[1834]: time="2025-11-01T01:01:36.176331856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:01:36.176844 containerd[1834]: time="2025-11-01T01:01:36.176830184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", 
repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.277642ms" Nov 1 01:01:36.178501 containerd[1834]: time="2025-11-01T01:01:36.178485801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 563.812459ms" Nov 1 01:01:36.254082 kubelet[2665]: E1101 01:01:36.254033 2665 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://145.40.82.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 145.40.82.59:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 01:01:36.268146 containerd[1834]: time="2025-11-01T01:01:36.267914008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:36.268146 containerd[1834]: time="2025-11-01T01:01:36.268131935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:36.268146 containerd[1834]: time="2025-11-01T01:01:36.268140094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.267958563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.267956586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.268176997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.268175080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.268184488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.268186858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.268185364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.268232353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:36.268277 containerd[1834]: time="2025-11-01T01:01:36.268242184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:36.289396 systemd[1]: Started cri-containerd-03ce1781d5f81ead3818a39fdb2b12e6a4e10c5723027fb99a208f710cc8faac.scope - libcontainer container 03ce1781d5f81ead3818a39fdb2b12e6a4e10c5723027fb99a208f710cc8faac. Nov 1 01:01:36.290149 systemd[1]: Started cri-containerd-1935ba249949486be933dbceb4cac6c095a29ede1f30df6a86965515e47051c1.scope - libcontainer container 1935ba249949486be933dbceb4cac6c095a29ede1f30df6a86965515e47051c1. 
Nov 1 01:01:36.290983 systemd[1]: Started cri-containerd-5e876614be1309d9952f086099f88f473b125ee195df7fef31f7b55a9d88509b.scope - libcontainer container 5e876614be1309d9952f086099f88f473b125ee195df7fef31f7b55a9d88509b. Nov 1 01:01:36.312995 containerd[1834]: time="2025-11-01T01:01:36.312971098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-13ad226fb7,Uid:b6ddb596a661251d1023235a1d735534,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ce1781d5f81ead3818a39fdb2b12e6a4e10c5723027fb99a208f710cc8faac\"" Nov 1 01:01:36.313400 containerd[1834]: time="2025-11-01T01:01:36.313382114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-13ad226fb7,Uid:258c3aaefb23e2561e9fe0da0a69970a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1935ba249949486be933dbceb4cac6c095a29ede1f30df6a86965515e47051c1\"" Nov 1 01:01:36.313779 containerd[1834]: time="2025-11-01T01:01:36.313764905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-13ad226fb7,Uid:7e9c42a70baee4d1044b2fa8b276af66,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e876614be1309d9952f086099f88f473b125ee195df7fef31f7b55a9d88509b\"" Nov 1 01:01:36.315509 containerd[1834]: time="2025-11-01T01:01:36.315467339Z" level=info msg="CreateContainer within sandbox \"1935ba249949486be933dbceb4cac6c095a29ede1f30df6a86965515e47051c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:01:36.315850 containerd[1834]: time="2025-11-01T01:01:36.315833369Z" level=info msg="CreateContainer within sandbox \"03ce1781d5f81ead3818a39fdb2b12e6a4e10c5723027fb99a208f710cc8faac\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:01:36.316374 containerd[1834]: time="2025-11-01T01:01:36.316361434Z" level=info msg="CreateContainer within sandbox \"5e876614be1309d9952f086099f88f473b125ee195df7fef31f7b55a9d88509b\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:01:36.340201 containerd[1834]: time="2025-11-01T01:01:36.340187568Z" level=info msg="CreateContainer within sandbox \"03ce1781d5f81ead3818a39fdb2b12e6a4e10c5723027fb99a208f710cc8faac\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c8192f395e3cfeec217b4138895a9e8ccebb0d16a4c24bc7c0bd91b64a98a88\"" Nov 1 01:01:36.340478 containerd[1834]: time="2025-11-01T01:01:36.340462481Z" level=info msg="StartContainer for \"8c8192f395e3cfeec217b4138895a9e8ccebb0d16a4c24bc7c0bd91b64a98a88\"" Nov 1 01:01:36.343845 containerd[1834]: time="2025-11-01T01:01:36.343822056Z" level=info msg="CreateContainer within sandbox \"5e876614be1309d9952f086099f88f473b125ee195df7fef31f7b55a9d88509b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bd2651b664ac3937c5de477486614904e22d21ef68f05f0fe7caf718086b1d6d\"" Nov 1 01:01:36.343967 containerd[1834]: time="2025-11-01T01:01:36.343954219Z" level=info msg="CreateContainer within sandbox \"1935ba249949486be933dbceb4cac6c095a29ede1f30df6a86965515e47051c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9babfa2f7168d4af6727f922985add035d4525ea51614a32b1bd3ef50564b86e\"" Nov 1 01:01:36.344053 containerd[1834]: time="2025-11-01T01:01:36.344039948Z" level=info msg="StartContainer for \"bd2651b664ac3937c5de477486614904e22d21ef68f05f0fe7caf718086b1d6d\"" Nov 1 01:01:36.344121 containerd[1834]: time="2025-11-01T01:01:36.344110591Z" level=info msg="StartContainer for \"9babfa2f7168d4af6727f922985add035d4525ea51614a32b1bd3ef50564b86e\"" Nov 1 01:01:36.370435 systemd[1]: Started cri-containerd-8c8192f395e3cfeec217b4138895a9e8ccebb0d16a4c24bc7c0bd91b64a98a88.scope - libcontainer container 8c8192f395e3cfeec217b4138895a9e8ccebb0d16a4c24bc7c0bd91b64a98a88. 
Nov 1 01:01:36.372774 systemd[1]: Started cri-containerd-9babfa2f7168d4af6727f922985add035d4525ea51614a32b1bd3ef50564b86e.scope - libcontainer container 9babfa2f7168d4af6727f922985add035d4525ea51614a32b1bd3ef50564b86e. Nov 1 01:01:36.373456 systemd[1]: Started cri-containerd-bd2651b664ac3937c5de477486614904e22d21ef68f05f0fe7caf718086b1d6d.scope - libcontainer container bd2651b664ac3937c5de477486614904e22d21ef68f05f0fe7caf718086b1d6d. Nov 1 01:01:36.400474 containerd[1834]: time="2025-11-01T01:01:36.400453017Z" level=info msg="StartContainer for \"bd2651b664ac3937c5de477486614904e22d21ef68f05f0fe7caf718086b1d6d\" returns successfully" Nov 1 01:01:36.400549 containerd[1834]: time="2025-11-01T01:01:36.400508086Z" level=info msg="StartContainer for \"8c8192f395e3cfeec217b4138895a9e8ccebb0d16a4c24bc7c0bd91b64a98a88\" returns successfully" Nov 1 01:01:36.402122 containerd[1834]: time="2025-11-01T01:01:36.402103660Z" level=info msg="StartContainer for \"9babfa2f7168d4af6727f922985add035d4525ea51614a32b1bd3ef50564b86e\" returns successfully" Nov 1 01:01:36.699234 kubelet[2665]: I1101 01:01:36.699216 2665 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.154017 kubelet[2665]: E1101 01:01:37.153967 2665 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-13ad226fb7\" not found" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.154241 kubelet[2665]: E1101 01:01:37.154016 2665 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-13ad226fb7\" not found" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.154725 kubelet[2665]: E1101 01:01:37.154718 2665 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-13ad226fb7\" not found" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.499991 kubelet[2665]: E1101 01:01:37.499969 2665 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-13ad226fb7\" not found" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.706341 kubelet[2665]: I1101 01:01:37.706307 2665 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.739999 kubelet[2665]: I1101 01:01:37.739928 2665 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.753119 kubelet[2665]: E1101 01:01:37.752824 2665 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-13ad226fb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.753119 kubelet[2665]: I1101 01:01:37.752891 2665 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.756827 kubelet[2665]: E1101 01:01:37.756722 2665 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.756827 kubelet[2665]: I1101 01:01:37.756771 2665 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:37.760836 kubelet[2665]: E1101 01:01:37.760731 2665 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:38.131752 kubelet[2665]: I1101 01:01:38.131503 2665 apiserver.go:52] "Watching apiserver" Nov 1 01:01:38.139600 kubelet[2665]: I1101 01:01:38.139555 2665 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Nov 1 01:01:38.157275 kubelet[2665]: I1101 01:01:38.157201 2665 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:38.158183 kubelet[2665]: I1101 01:01:38.157457 2665 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:38.161111 kubelet[2665]: E1101 01:01:38.161056 2665 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-13ad226fb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:38.161428 kubelet[2665]: E1101 01:01:38.161374 2665 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:39.159777 kubelet[2665]: I1101 01:01:39.159723 2665 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7" Nov 1 01:01:39.166355 kubelet[2665]: I1101 01:01:39.166290 2665 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 01:01:40.210583 systemd[1]: Reloading requested from client PID 2991 ('systemctl') (unit session-11.scope)... Nov 1 01:01:40.210590 systemd[1]: Reloading... Nov 1 01:01:40.253283 zram_generator::config[3030]: No configuration found. Nov 1 01:01:40.322916 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 01:01:40.391910 systemd[1]: Reloading finished in 181 ms. 
Nov 1 01:01:40.412814 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:01:40.426943 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 01:01:40.427047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:01:40.444686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 01:01:40.715556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 01:01:40.717904 (kubelet)[3094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 01:01:40.738347 kubelet[3094]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 01:01:40.738347 kubelet[3094]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 01:01:40.738347 kubelet[3094]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 01:01:40.738612 kubelet[3094]: I1101 01:01:40.738381 3094 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 01:01:40.742477 kubelet[3094]: I1101 01:01:40.742461 3094 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 1 01:01:40.742477 kubelet[3094]: I1101 01:01:40.742474 3094 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 01:01:40.742621 kubelet[3094]: I1101 01:01:40.742613 3094 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 1 01:01:40.743419 kubelet[3094]: I1101 01:01:40.743410 3094 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 1 01:01:40.744822 kubelet[3094]: I1101 01:01:40.744814 3094 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 01:01:40.746925 kubelet[3094]: E1101 01:01:40.746906 3094 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 01:01:40.746925 kubelet[3094]: I1101 01:01:40.746925 3094 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 01:01:40.754072 kubelet[3094]: I1101 01:01:40.754035 3094 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 01:01:40.754174 kubelet[3094]: I1101 01:01:40.754161 3094 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 01:01:40.754286 kubelet[3094]: I1101 01:01:40.754175 3094 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-13ad226fb7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 01:01:40.754369 kubelet[3094]: I1101 01:01:40.754290 3094 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 01:01:40.754369 kubelet[3094]: I1101 01:01:40.754297 3094 container_manager_linux.go:303] "Creating device plugin manager"
Nov 1 01:01:40.754369 kubelet[3094]: I1101 01:01:40.754329 3094 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:01:40.754463 kubelet[3094]: I1101 01:01:40.754456 3094 kubelet.go:480] "Attempting to sync node with API server"
Nov 1 01:01:40.754492 kubelet[3094]: I1101 01:01:40.754464 3094 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 01:01:40.754492 kubelet[3094]: I1101 01:01:40.754480 3094 kubelet.go:386] "Adding apiserver pod source"
Nov 1 01:01:40.754531 kubelet[3094]: I1101 01:01:40.754491 3094 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 01:01:40.755108 kubelet[3094]: I1101 01:01:40.755089 3094 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 01:01:40.755513 kubelet[3094]: I1101 01:01:40.755504 3094 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 1 01:01:40.757037 kubelet[3094]: I1101 01:01:40.757018 3094 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 01:01:40.757091 kubelet[3094]: I1101 01:01:40.757073 3094 server.go:1289] "Started kubelet"
Nov 1 01:01:40.757335 kubelet[3094]: I1101 01:01:40.757306 3094 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 01:01:40.758018 kubelet[3094]: I1101 01:01:40.757294 3094 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 01:01:40.758018 kubelet[3094]: I1101 01:01:40.757691 3094 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 01:01:40.759822 kubelet[3094]: I1101 01:01:40.759810 3094 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 01:01:40.759888 kubelet[3094]: I1101 01:01:40.759858 3094 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 01:01:40.759888 kubelet[3094]: E1101 01:01:40.759869 3094 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-13ad226fb7\" not found"
Nov 1 01:01:40.759888 kubelet[3094]: I1101 01:01:40.759873 3094 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 01:01:40.759888 kubelet[3094]: I1101 01:01:40.759885 3094 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 01:01:40.760002 kubelet[3094]: I1101 01:01:40.759994 3094 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 01:01:40.760103 kubelet[3094]: E1101 01:01:40.760086 3094 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 01:01:40.760983 kubelet[3094]: I1101 01:01:40.760208 3094 server.go:317] "Adding debug handlers to kubelet server"
Nov 1 01:01:40.760983 kubelet[3094]: I1101 01:01:40.760317 3094 factory.go:223] Registration of the systemd container factory successfully
Nov 1 01:01:40.760983 kubelet[3094]: I1101 01:01:40.760378 3094 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 01:01:40.763179 kubelet[3094]: I1101 01:01:40.763164 3094 factory.go:223] Registration of the containerd container factory successfully
Nov 1 01:01:40.767316 kubelet[3094]: I1101 01:01:40.767295 3094 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 1 01:01:40.767848 kubelet[3094]: I1101 01:01:40.767836 3094 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 1 01:01:40.767848 kubelet[3094]: I1101 01:01:40.767848 3094 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 1 01:01:40.767906 kubelet[3094]: I1101 01:01:40.767860 3094 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 01:01:40.767906 kubelet[3094]: I1101 01:01:40.767864 3094 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 1 01:01:40.767906 kubelet[3094]: E1101 01:01:40.767891 3094 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 01:01:40.777848 kubelet[3094]: I1101 01:01:40.777832 3094 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 01:01:40.777848 kubelet[3094]: I1101 01:01:40.777842 3094 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 01:01:40.777955 kubelet[3094]: I1101 01:01:40.777860 3094 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:01:40.777988 kubelet[3094]: I1101 01:01:40.777968 3094 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 1 01:01:40.777988 kubelet[3094]: I1101 01:01:40.777978 3094 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 1 01:01:40.778046 kubelet[3094]: I1101 01:01:40.777993 3094 policy_none.go:49] "None policy: Start"
Nov 1 01:01:40.778046 kubelet[3094]: I1101 01:01:40.778001 3094 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 1 01:01:40.778046 kubelet[3094]: I1101 01:01:40.778009 3094 state_mem.go:35] "Initializing new in-memory state store"
Nov 1 01:01:40.778144 kubelet[3094]: I1101 01:01:40.778098 3094 state_mem.go:75] "Updated machine memory state"
Nov 1 01:01:40.780120 kubelet[3094]: E1101 01:01:40.780112 3094 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 1 01:01:40.780244 kubelet[3094]: I1101 01:01:40.780204 3094 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 01:01:40.780244 kubelet[3094]: I1101 01:01:40.780214 3094 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 01:01:40.780345 kubelet[3094]: I1101 01:01:40.780332 3094 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 01:01:40.780701 kubelet[3094]: E1101 01:01:40.780649 3094 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 01:01:40.870388 kubelet[3094]: I1101 01:01:40.870288 3094 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.870604 kubelet[3094]: I1101 01:01:40.870440 3094 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.870604 kubelet[3094]: I1101 01:01:40.870310 3094 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.878423 kubelet[3094]: I1101 01:01:40.878368 3094 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:01:40.878750 kubelet[3094]: I1101 01:01:40.878701 3094 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:01:40.879258 kubelet[3094]: I1101 01:01:40.879193 3094 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:01:40.879383 kubelet[3094]: E1101 01:01:40.879341 3094 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-13ad226fb7\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.886702 kubelet[3094]: I1101 01:01:40.886636 3094 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.895841 kubelet[3094]: I1101 01:01:40.895795 3094 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.896026 kubelet[3094]: I1101 01:01:40.895941 3094 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961014 kubelet[3094]: I1101 01:01:40.960943 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961319 kubelet[3094]: I1101 01:01:40.961029 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b6ddb596a661251d1023235a1d735534-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" (UID: \"b6ddb596a661251d1023235a1d735534\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961319 kubelet[3094]: I1101 01:01:40.961086 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961319 kubelet[3094]: I1101 01:01:40.961204 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961716 kubelet[3094]: I1101 01:01:40.961313 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961716 kubelet[3094]: I1101 01:01:40.961443 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e9c42a70baee4d1044b2fa8b276af66-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-13ad226fb7\" (UID: \"7e9c42a70baee4d1044b2fa8b276af66\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961716 kubelet[3094]: I1101 01:01:40.961510 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b6ddb596a661251d1023235a1d735534-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" (UID: \"b6ddb596a661251d1023235a1d735534\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961716 kubelet[3094]: I1101 01:01:40.961584 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b6ddb596a661251d1023235a1d735534-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" (UID: \"b6ddb596a661251d1023235a1d735534\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:40.961716 kubelet[3094]: I1101 01:01:40.961687 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/258c3aaefb23e2561e9fe0da0a69970a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" (UID: \"258c3aaefb23e2561e9fe0da0a69970a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:41.755622 kubelet[3094]: I1101 01:01:41.755600 3094 apiserver.go:52] "Watching apiserver"
Nov 1 01:01:41.760350 kubelet[3094]: I1101 01:01:41.760287 3094 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 1 01:01:41.773465 kubelet[3094]: I1101 01:01:41.773413 3094 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:41.773644 kubelet[3094]: I1101 01:01:41.773512 3094 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:41.773644 kubelet[3094]: I1101 01:01:41.773570 3094 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:41.780910 kubelet[3094]: I1101 01:01:41.780856 3094 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:01:41.781187 kubelet[3094]: E1101 01:01:41.781003 3094 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-13ad226fb7\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:41.781406 kubelet[3094]: I1101 01:01:41.781362 3094 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:01:41.781622 kubelet[3094]: I1101 01:01:41.781369 3094 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 01:01:41.781622 kubelet[3094]: E1101 01:01:41.781513 3094 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-13ad226fb7\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:41.781622 kubelet[3094]: E1101 01:01:41.781600 3094 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-13ad226fb7\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7"
Nov 1 01:01:41.821689 kubelet[3094]: I1101 01:01:41.821637 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-13ad226fb7" podStartSLOduration=1.821604093 podStartE2EDuration="1.821604093s" podCreationTimestamp="2025-11-01 01:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:41.821579327 +0000 UTC m=+1.101716117" watchObservedRunningTime="2025-11-01 01:01:41.821604093 +0000 UTC m=+1.101740883"
Nov 1 01:01:41.826948 kubelet[3094]: I1101 01:01:41.826909 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-13ad226fb7" podStartSLOduration=1.826894227 podStartE2EDuration="1.826894227s" podCreationTimestamp="2025-11-01 01:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:41.82684947 +0000 UTC m=+1.106986258" watchObservedRunningTime="2025-11-01 01:01:41.826894227 +0000 UTC m=+1.107031008"
Nov 1 01:01:41.836522 kubelet[3094]: I1101 01:01:41.836460 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-13ad226fb7" podStartSLOduration=2.836447469 podStartE2EDuration="2.836447469s" podCreationTimestamp="2025-11-01 01:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:41.831694168 +0000 UTC m=+1.111830951" watchObservedRunningTime="2025-11-01 01:01:41.836447469 +0000 UTC m=+1.116584245"
Nov 1 01:01:45.535630 kubelet[3094]: I1101 01:01:45.535520 3094 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 1 01:01:45.536581 containerd[1834]: time="2025-11-01T01:01:45.536261328Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 1 01:01:45.537259 kubelet[3094]: I1101 01:01:45.536698 3094 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 1 01:01:46.598254 systemd[1]: Created slice kubepods-besteffort-pod10f6ab24_c78b_40ae_a920_9f176cd89372.slice - libcontainer container kubepods-besteffort-pod10f6ab24_c78b_40ae_a920_9f176cd89372.slice.
Nov 1 01:01:46.612493 systemd[1]: Created slice kubepods-besteffort-pod125ed8f7_931c_4c1d_99dc_0d95ef74be12.slice - libcontainer container kubepods-besteffort-pod125ed8f7_931c_4c1d_99dc_0d95ef74be12.slice.
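The "Created slice" entries above show how the kubelet names pod cgroups under systemd: QoS class plus the pod UID with its dashes mapped to underscores. A small sketch reproducing the slice name the log shows for the tigera-operator pod:

```shell
# Rebuild the systemd slice name from the pod UID seen in the log:
# kubepods-besteffort-pod<UID with '-' replaced by '_'>.slice
uid="10f6ab24-c78b-40ae-a920-9f176cd89372"   # pod UID from the log above
slice="kubepods-besteffort-pod$(printf '%s' "$uid" | tr '-' '_').slice"
echo "$slice"
```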
Nov 1 01:01:46.704879 kubelet[3094]: I1101 01:01:46.704735 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djspd\" (UniqueName: \"kubernetes.io/projected/10f6ab24-c78b-40ae-a920-9f176cd89372-kube-api-access-djspd\") pod \"tigera-operator-7dcd859c48-kqwlg\" (UID: \"10f6ab24-c78b-40ae-a920-9f176cd89372\") " pod="tigera-operator/tigera-operator-7dcd859c48-kqwlg"
Nov 1 01:01:46.705723 kubelet[3094]: I1101 01:01:46.704885 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10f6ab24-c78b-40ae-a920-9f176cd89372-var-lib-calico\") pod \"tigera-operator-7dcd859c48-kqwlg\" (UID: \"10f6ab24-c78b-40ae-a920-9f176cd89372\") " pod="tigera-operator/tigera-operator-7dcd859c48-kqwlg"
Nov 1 01:01:46.705723 kubelet[3094]: I1101 01:01:46.704974 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125ed8f7-931c-4c1d-99dc-0d95ef74be12-xtables-lock\") pod \"kube-proxy-v82hr\" (UID: \"125ed8f7-931c-4c1d-99dc-0d95ef74be12\") " pod="kube-system/kube-proxy-v82hr"
Nov 1 01:01:46.705723 kubelet[3094]: I1101 01:01:46.705075 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/125ed8f7-931c-4c1d-99dc-0d95ef74be12-lib-modules\") pod \"kube-proxy-v82hr\" (UID: \"125ed8f7-931c-4c1d-99dc-0d95ef74be12\") " pod="kube-system/kube-proxy-v82hr"
Nov 1 01:01:46.705723 kubelet[3094]: I1101 01:01:46.705152 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8k8q\" (UniqueName: \"kubernetes.io/projected/125ed8f7-931c-4c1d-99dc-0d95ef74be12-kube-api-access-f8k8q\") pod \"kube-proxy-v82hr\" (UID: \"125ed8f7-931c-4c1d-99dc-0d95ef74be12\") " pod="kube-system/kube-proxy-v82hr"
Nov 1 01:01:46.705723 kubelet[3094]: I1101 01:01:46.705241 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/125ed8f7-931c-4c1d-99dc-0d95ef74be12-kube-proxy\") pod \"kube-proxy-v82hr\" (UID: \"125ed8f7-931c-4c1d-99dc-0d95ef74be12\") " pod="kube-system/kube-proxy-v82hr"
Nov 1 01:01:46.912346 containerd[1834]: time="2025-11-01T01:01:46.912100459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kqwlg,Uid:10f6ab24-c78b-40ae-a920-9f176cd89372,Namespace:tigera-operator,Attempt:0,}"
Nov 1 01:01:46.915396 containerd[1834]: time="2025-11-01T01:01:46.915278266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v82hr,Uid:125ed8f7-931c-4c1d-99dc-0d95ef74be12,Namespace:kube-system,Attempt:0,}"
Nov 1 01:01:46.925966 containerd[1834]: time="2025-11-01T01:01:46.925921145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:01:46.925966 containerd[1834]: time="2025-11-01T01:01:46.925950518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:01:46.926103 containerd[1834]: time="2025-11-01T01:01:46.925966269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:01:46.926250 containerd[1834]: time="2025-11-01T01:01:46.926227863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:01:46.926553 containerd[1834]: time="2025-11-01T01:01:46.926525277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:01:46.926576 containerd[1834]: time="2025-11-01T01:01:46.926551434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:01:46.926576 containerd[1834]: time="2025-11-01T01:01:46.926558860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:01:46.926613 containerd[1834]: time="2025-11-01T01:01:46.926599857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:01:46.949568 systemd[1]: Started cri-containerd-1fe1e0f33061d87c4d72d3078d085475411a8e421c680fd334748ec5175717ad.scope - libcontainer container 1fe1e0f33061d87c4d72d3078d085475411a8e421c680fd334748ec5175717ad.
Nov 1 01:01:46.950302 systemd[1]: Started cri-containerd-3cbb7a2f58f871f3691b2c01416f7aa16e45006723d37f17f2cf4a17f8740d74.scope - libcontainer container 3cbb7a2f58f871f3691b2c01416f7aa16e45006723d37f17f2cf4a17f8740d74.
Nov 1 01:01:46.961588 containerd[1834]: time="2025-11-01T01:01:46.961537478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v82hr,Uid:125ed8f7-931c-4c1d-99dc-0d95ef74be12,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cbb7a2f58f871f3691b2c01416f7aa16e45006723d37f17f2cf4a17f8740d74\""
Nov 1 01:01:46.965637 containerd[1834]: time="2025-11-01T01:01:46.965614062Z" level=info msg="CreateContainer within sandbox \"3cbb7a2f58f871f3691b2c01416f7aa16e45006723d37f17f2cf4a17f8740d74\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 1 01:01:46.972207 containerd[1834]: time="2025-11-01T01:01:46.972186217Z" level=info msg="CreateContainer within sandbox \"3cbb7a2f58f871f3691b2c01416f7aa16e45006723d37f17f2cf4a17f8740d74\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c29046803b7237018341015786c924ecb4837fe385764186735c2fdf55814e9\""
Nov 1 01:01:46.972460 containerd[1834]: time="2025-11-01T01:01:46.972446413Z" level=info msg="StartContainer for \"2c29046803b7237018341015786c924ecb4837fe385764186735c2fdf55814e9\""
Nov 1 01:01:46.974903 containerd[1834]: time="2025-11-01T01:01:46.974880921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-kqwlg,Uid:10f6ab24-c78b-40ae-a920-9f176cd89372,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1fe1e0f33061d87c4d72d3078d085475411a8e421c680fd334748ec5175717ad\""
Nov 1 01:01:46.975592 containerd[1834]: time="2025-11-01T01:01:46.975581258Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 1 01:01:46.999453 systemd[1]: Started cri-containerd-2c29046803b7237018341015786c924ecb4837fe385764186735c2fdf55814e9.scope - libcontainer container 2c29046803b7237018341015786c924ecb4837fe385764186735c2fdf55814e9.
Nov 1 01:01:47.014186 containerd[1834]: time="2025-11-01T01:01:47.014162303Z" level=info msg="StartContainer for \"2c29046803b7237018341015786c924ecb4837fe385764186735c2fdf55814e9\" returns successfully"
Nov 1 01:01:47.802955 kubelet[3094]: I1101 01:01:47.802900 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v82hr" podStartSLOduration=1.802891046 podStartE2EDuration="1.802891046s" podCreationTimestamp="2025-11-01 01:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:01:47.802848476 +0000 UTC m=+7.082985255" watchObservedRunningTime="2025-11-01 01:01:47.802891046 +0000 UTC m=+7.083027820"
Nov 1 01:01:48.546342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401278321.mount: Deactivated successfully.
Nov 1 01:01:49.643507 containerd[1834]: time="2025-11-01T01:01:49.643431229Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:49.726530 containerd[1834]: time="2025-11-01T01:01:49.726444289Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 1 01:01:49.766323 containerd[1834]: time="2025-11-01T01:01:49.766186542Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:49.824389 containerd[1834]: time="2025-11-01T01:01:49.824328875Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:01:49.824847 containerd[1834]: time="2025-11-01T01:01:49.824797081Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.849197795s"
Nov 1 01:01:49.824847 containerd[1834]: time="2025-11-01T01:01:49.824843332Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 1 01:01:49.889687 containerd[1834]: time="2025-11-01T01:01:49.889615227Z" level=info msg="CreateContainer within sandbox \"1fe1e0f33061d87c4d72d3078d085475411a8e421c680fd334748ec5175717ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 1 01:01:50.254240 containerd[1834]: time="2025-11-01T01:01:50.254158570Z" level=info msg="CreateContainer within sandbox \"1fe1e0f33061d87c4d72d3078d085475411a8e421c680fd334748ec5175717ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"048c58c63e862ae7fe93409fbe8551891e8ab9636a4d3a185fdeb409a8a3e1f9\""
Nov 1 01:01:50.254728 containerd[1834]: time="2025-11-01T01:01:50.254684888Z" level=info msg="StartContainer for \"048c58c63e862ae7fe93409fbe8551891e8ab9636a4d3a185fdeb409a8a3e1f9\""
Nov 1 01:01:50.299659 systemd[1]: Started cri-containerd-048c58c63e862ae7fe93409fbe8551891e8ab9636a4d3a185fdeb409a8a3e1f9.scope - libcontainer container 048c58c63e862ae7fe93409fbe8551891e8ab9636a4d3a185fdeb409a8a3e1f9.
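The "Pulled image" entry reports the operator image at 25057686 bytes pulled in 2.849197795s. A back-of-the-envelope throughput check from those two numbers (purely illustrative arithmetic, not a containerd metric):

```shell
# Effective pull throughput from the log's own size and duration figures.
awk 'BEGIN { printf "%.1f MiB/s\n", 25057686 / 2.849197795 / 1048576 }'
```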
Nov 1 01:01:50.379113 containerd[1834]: time="2025-11-01T01:01:50.379055723Z" level=info msg="StartContainer for \"048c58c63e862ae7fe93409fbe8551891e8ab9636a4d3a185fdeb409a8a3e1f9\" returns successfully"
Nov 1 01:01:50.824769 kubelet[3094]: I1101 01:01:50.824620 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-kqwlg" podStartSLOduration=1.974551843 podStartE2EDuration="4.824570589s" podCreationTimestamp="2025-11-01 01:01:46 +0000 UTC" firstStartedPulling="2025-11-01 01:01:46.975429492 +0000 UTC m=+6.255566265" lastFinishedPulling="2025-11-01 01:01:49.825448236 +0000 UTC m=+9.105585011" observedRunningTime="2025-11-01 01:01:50.824519124 +0000 UTC m=+10.104655975" watchObservedRunningTime="2025-11-01 01:01:50.824570589 +0000 UTC m=+10.104707444"
Nov 1 01:01:52.103458 update_engine[1821]: I20251101 01:01:52.103389 1821 update_attempter.cc:509] Updating boot flags...
Nov 1 01:01:52.137279 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (3542)
Nov 1 01:01:52.170235 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (3538)
Nov 1 01:01:52.189236 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 34 scanned by (udev-worker) (3538)
Nov 1 01:01:54.723844 sudo[2110]: pam_unix(sudo:session): session closed for user root
Nov 1 01:01:54.724888 sshd[2107]: pam_unix(sshd:session): session closed for user core
Nov 1 01:01:54.727151 systemd[1]: sshd@8-145.40.82.59:22-139.178.89.65:48850.service: Deactivated successfully.
Nov 1 01:01:54.728163 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 01:01:54.728274 systemd[1]: session-11.scope: Consumed 3.881s CPU time, 172.0M memory peak, 0B memory swap peak.
Nov 1 01:01:54.728581 systemd-logind[1816]: Session 11 logged out. Waiting for processes to exit.
Nov 1 01:01:54.729150 systemd-logind[1816]: Removed session 11.
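The tigera-operator startup-latency entry above reports podStartE2EDuration="4.824570589s" with image pulling running from m=+6.255566265 to m=+9.105585011; the SLO duration is the end-to-end time minus that pull window. Checking the arithmetic with the log's own numbers:

```shell
# podStartSLOduration should equal podStartE2EDuration minus the pull window.
awk 'BEGIN {
  pull = 9.105585011 - 6.255566265      # lastFinishedPulling - firstStartedPulling
  printf "%.9f\n", 4.824570589 - pull   # matches podStartSLOduration in the log
}'
```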
Nov 1 01:01:58.759801 systemd[1]: Created slice kubepods-besteffort-pod37e7d204_d4d6_4679_a1ea_f5b7a514db76.slice - libcontainer container kubepods-besteffort-pod37e7d204_d4d6_4679_a1ea_f5b7a514db76.slice.
Nov 1 01:01:58.786410 kubelet[3094]: I1101 01:01:58.786350 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdl8l\" (UniqueName: \"kubernetes.io/projected/37e7d204-d4d6-4679-a1ea-f5b7a514db76-kube-api-access-gdl8l\") pod \"calico-typha-94bd4856b-pbmq5\" (UID: \"37e7d204-d4d6-4679-a1ea-f5b7a514db76\") " pod="calico-system/calico-typha-94bd4856b-pbmq5"
Nov 1 01:01:58.786410 kubelet[3094]: I1101 01:01:58.786395 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37e7d204-d4d6-4679-a1ea-f5b7a514db76-tigera-ca-bundle\") pod \"calico-typha-94bd4856b-pbmq5\" (UID: \"37e7d204-d4d6-4679-a1ea-f5b7a514db76\") " pod="calico-system/calico-typha-94bd4856b-pbmq5"
Nov 1 01:01:58.786837 kubelet[3094]: I1101 01:01:58.786432 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/37e7d204-d4d6-4679-a1ea-f5b7a514db76-typha-certs\") pod \"calico-typha-94bd4856b-pbmq5\" (UID: \"37e7d204-d4d6-4679-a1ea-f5b7a514db76\") " pod="calico-system/calico-typha-94bd4856b-pbmq5"
Nov 1 01:01:58.954120 systemd[1]: Created slice kubepods-besteffort-podfd4ea53a_29b6_463c_abe1_9384d539e061.slice - libcontainer container kubepods-besteffort-podfd4ea53a_29b6_463c_abe1_9384d539e061.slice.
Nov 1 01:01:58.988686 kubelet[3094]: I1101 01:01:58.988570 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-policysync\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.988686 kubelet[3094]: I1101 01:01:58.988679 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd4ea53a-29b6-463c-abe1-9384d539e061-tigera-ca-bundle\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989068 kubelet[3094]: I1101 01:01:58.988745 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-flexvol-driver-host\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989068 kubelet[3094]: I1101 01:01:58.988911 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-var-lib-calico\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989068 kubelet[3094]: I1101 01:01:58.989011 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9f6p\" (UniqueName: \"kubernetes.io/projected/fd4ea53a-29b6-463c-abe1-9384d539e061-kube-api-access-l9f6p\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989428 kubelet[3094]: I1101 
01:01:58.989089 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fd4ea53a-29b6-463c-abe1-9384d539e061-node-certs\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989428 kubelet[3094]: I1101 01:01:58.989165 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-var-run-calico\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989428 kubelet[3094]: I1101 01:01:58.989288 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-cni-bin-dir\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989428 kubelet[3094]: I1101 01:01:58.989356 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-cni-log-dir\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989806 kubelet[3094]: I1101 01:01:58.989429 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-cni-net-dir\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989806 kubelet[3094]: I1101 01:01:58.989481 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-xtables-lock\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:58.989806 kubelet[3094]: I1101 01:01:58.989527 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd4ea53a-29b6-463c-abe1-9384d539e061-lib-modules\") pod \"calico-node-vg458\" (UID: \"fd4ea53a-29b6-463c-abe1-9384d539e061\") " pod="calico-system/calico-node-vg458" Nov 1 01:01:59.064092 containerd[1834]: time="2025-11-01T01:01:59.063830135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-94bd4856b-pbmq5,Uid:37e7d204-d4d6-4679-a1ea-f5b7a514db76,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:59.074541 containerd[1834]: time="2025-11-01T01:01:59.074501132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:59.074541 containerd[1834]: time="2025-11-01T01:01:59.074531052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:59.074541 containerd[1834]: time="2025-11-01T01:01:59.074538478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:59.074652 containerd[1834]: time="2025-11-01T01:01:59.074576717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:59.090534 kubelet[3094]: E1101 01:01:59.090513 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.090534 kubelet[3094]: W1101 01:01:59.090526 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.090633 kubelet[3094]: E1101 01:01:59.090540 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.090715 kubelet[3094]: E1101 01:01:59.090682 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.090715 kubelet[3094]: W1101 01:01:59.090687 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.090715 kubelet[3094]: E1101 01:01:59.090693 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.090820 kubelet[3094]: E1101 01:01:59.090814 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.090820 kubelet[3094]: W1101 01:01:59.090819 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.090869 kubelet[3094]: E1101 01:01:59.090824 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.090940 kubelet[3094]: E1101 01:01:59.090931 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.090940 kubelet[3094]: W1101 01:01:59.090938 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.090940 kubelet[3094]: E1101 01:01:59.090943 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.091035 kubelet[3094]: E1101 01:01:59.091030 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.091035 kubelet[3094]: W1101 01:01:59.091035 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.091072 kubelet[3094]: E1101 01:01:59.091040 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.091142 kubelet[3094]: E1101 01:01:59.091137 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.091173 kubelet[3094]: W1101 01:01:59.091142 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.091173 kubelet[3094]: E1101 01:01:59.091147 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.091255 kubelet[3094]: E1101 01:01:59.091249 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.091255 kubelet[3094]: W1101 01:01:59.091254 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.091307 kubelet[3094]: E1101 01:01:59.091259 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.091345 kubelet[3094]: E1101 01:01:59.091339 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.091369 kubelet[3094]: W1101 01:01:59.091345 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.091369 kubelet[3094]: E1101 01:01:59.091350 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.091487 kubelet[3094]: E1101 01:01:59.091481 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.091487 kubelet[3094]: W1101 01:01:59.091486 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.091530 kubelet[3094]: E1101 01:01:59.091491 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.091824 kubelet[3094]: E1101 01:01:59.091816 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.091846 kubelet[3094]: W1101 01:01:59.091825 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.091846 kubelet[3094]: E1101 01:01:59.091833 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.092050 kubelet[3094]: E1101 01:01:59.092044 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.092070 kubelet[3094]: W1101 01:01:59.092050 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.092070 kubelet[3094]: E1101 01:01:59.092057 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.094418 systemd[1]: Started cri-containerd-39af1f1f945db0079edc3ebecd97f38c28692ef5a9a04e46158b0906898bdf4f.scope - libcontainer container 39af1f1f945db0079edc3ebecd97f38c28692ef5a9a04e46158b0906898bdf4f. Nov 1 01:01:59.094906 kubelet[3094]: E1101 01:01:59.094896 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.094906 kubelet[3094]: W1101 01:01:59.094904 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.094968 kubelet[3094]: E1101 01:01:59.094912 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.111361 kubelet[3094]: E1101 01:01:59.111339 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:01:59.122880 containerd[1834]: time="2025-11-01T01:01:59.122851917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-94bd4856b-pbmq5,Uid:37e7d204-d4d6-4679-a1ea-f5b7a514db76,Namespace:calico-system,Attempt:0,} returns sandbox id \"39af1f1f945db0079edc3ebecd97f38c28692ef5a9a04e46158b0906898bdf4f\"" Nov 1 01:01:59.123639 containerd[1834]: time="2025-11-01T01:01:59.123621899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:01:59.176552 kubelet[3094]: E1101 01:01:59.176439 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.176552 kubelet[3094]: W1101 01:01:59.176498 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.176552 kubelet[3094]: E1101 01:01:59.176545 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.177200 kubelet[3094]: E1101 01:01:59.177103 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.177200 kubelet[3094]: W1101 01:01:59.177139 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.177200 kubelet[3094]: E1101 01:01:59.177172 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.177856 kubelet[3094]: E1101 01:01:59.177777 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.177856 kubelet[3094]: W1101 01:01:59.177814 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.177856 kubelet[3094]: E1101 01:01:59.177847 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.178591 kubelet[3094]: E1101 01:01:59.178495 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.178591 kubelet[3094]: W1101 01:01:59.178533 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.178591 kubelet[3094]: E1101 01:01:59.178567 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.179257 kubelet[3094]: E1101 01:01:59.179174 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.179257 kubelet[3094]: W1101 01:01:59.179211 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.179546 kubelet[3094]: E1101 01:01:59.179278 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.179888 kubelet[3094]: E1101 01:01:59.179799 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.179888 kubelet[3094]: W1101 01:01:59.179827 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.179888 kubelet[3094]: E1101 01:01:59.179854 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.180343 kubelet[3094]: E1101 01:01:59.180298 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.180343 kubelet[3094]: W1101 01:01:59.180325 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.180575 kubelet[3094]: E1101 01:01:59.180350 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.180842 kubelet[3094]: E1101 01:01:59.180794 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.180842 kubelet[3094]: W1101 01:01:59.180820 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.181060 kubelet[3094]: E1101 01:01:59.180846 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.181375 kubelet[3094]: E1101 01:01:59.181312 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.181375 kubelet[3094]: W1101 01:01:59.181338 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.181375 kubelet[3094]: E1101 01:01:59.181362 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.181971 kubelet[3094]: E1101 01:01:59.181905 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.181971 kubelet[3094]: W1101 01:01:59.181936 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.181971 kubelet[3094]: E1101 01:01:59.181964 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.182583 kubelet[3094]: E1101 01:01:59.182534 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.182583 kubelet[3094]: W1101 01:01:59.182562 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.182792 kubelet[3094]: E1101 01:01:59.182589 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.183142 kubelet[3094]: E1101 01:01:59.183075 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.183142 kubelet[3094]: W1101 01:01:59.183109 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.183142 kubelet[3094]: E1101 01:01:59.183137 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.183751 kubelet[3094]: E1101 01:01:59.183702 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.183751 kubelet[3094]: W1101 01:01:59.183731 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.183949 kubelet[3094]: E1101 01:01:59.183763 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.184313 kubelet[3094]: E1101 01:01:59.184258 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.184313 kubelet[3094]: W1101 01:01:59.184287 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.184539 kubelet[3094]: E1101 01:01:59.184321 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.184906 kubelet[3094]: E1101 01:01:59.184859 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.184906 kubelet[3094]: W1101 01:01:59.184886 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.185123 kubelet[3094]: E1101 01:01:59.184918 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.185528 kubelet[3094]: E1101 01:01:59.185474 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.185528 kubelet[3094]: W1101 01:01:59.185511 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.185759 kubelet[3094]: E1101 01:01:59.185541 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.186144 kubelet[3094]: E1101 01:01:59.186080 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.186144 kubelet[3094]: W1101 01:01:59.186113 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.186386 kubelet[3094]: E1101 01:01:59.186147 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.186720 kubelet[3094]: E1101 01:01:59.186672 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.186720 kubelet[3094]: W1101 01:01:59.186700 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.186925 kubelet[3094]: E1101 01:01:59.186730 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.187317 kubelet[3094]: E1101 01:01:59.187274 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.187317 kubelet[3094]: W1101 01:01:59.187304 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.187540 kubelet[3094]: E1101 01:01:59.187330 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.187937 kubelet[3094]: E1101 01:01:59.187847 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.187937 kubelet[3094]: W1101 01:01:59.187882 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.187937 kubelet[3094]: E1101 01:01:59.187909 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.191673 kubelet[3094]: E1101 01:01:59.191589 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.191673 kubelet[3094]: W1101 01:01:59.191630 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.191673 kubelet[3094]: E1101 01:01:59.191667 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.192010 kubelet[3094]: I1101 01:01:59.191731 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1c2067e6-df38-44d0-9df8-192be51b26fc-registration-dir\") pod \"csi-node-driver-9vnfp\" (UID: \"1c2067e6-df38-44d0-9df8-192be51b26fc\") " pod="calico-system/csi-node-driver-9vnfp" Nov 1 01:01:59.192416 kubelet[3094]: E1101 01:01:59.192329 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.192416 kubelet[3094]: W1101 01:01:59.192377 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.192416 kubelet[3094]: E1101 01:01:59.192414 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.192831 kubelet[3094]: I1101 01:01:59.192476 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1c2067e6-df38-44d0-9df8-192be51b26fc-varrun\") pod \"csi-node-driver-9vnfp\" (UID: \"1c2067e6-df38-44d0-9df8-192be51b26fc\") " pod="calico-system/csi-node-driver-9vnfp" Nov 1 01:01:59.193218 kubelet[3094]: E1101 01:01:59.193128 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.193218 kubelet[3094]: W1101 01:01:59.193186 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.193498 kubelet[3094]: E1101 01:01:59.193256 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.193926 kubelet[3094]: E1101 01:01:59.193841 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.193926 kubelet[3094]: W1101 01:01:59.193879 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.193926 kubelet[3094]: E1101 01:01:59.193914 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.194540 kubelet[3094]: E1101 01:01:59.194461 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.194540 kubelet[3094]: W1101 01:01:59.194496 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.194540 kubelet[3094]: E1101 01:01:59.194525 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.194994 kubelet[3094]: I1101 01:01:59.194600 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzr2j\" (UniqueName: \"kubernetes.io/projected/1c2067e6-df38-44d0-9df8-192be51b26fc-kube-api-access-pzr2j\") pod \"csi-node-driver-9vnfp\" (UID: \"1c2067e6-df38-44d0-9df8-192be51b26fc\") " pod="calico-system/csi-node-driver-9vnfp" Nov 1 01:01:59.195193 kubelet[3094]: E1101 01:01:59.195153 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.195193 kubelet[3094]: W1101 01:01:59.195185 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.195488 kubelet[3094]: E1101 01:01:59.195214 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.195828 kubelet[3094]: E1101 01:01:59.195774 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.195828 kubelet[3094]: W1101 01:01:59.195808 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.196091 kubelet[3094]: E1101 01:01:59.195836 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.196470 kubelet[3094]: E1101 01:01:59.196411 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.196470 kubelet[3094]: W1101 01:01:59.196451 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.196782 kubelet[3094]: E1101 01:01:59.196495 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.196782 kubelet[3094]: I1101 01:01:59.196580 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1c2067e6-df38-44d0-9df8-192be51b26fc-socket-dir\") pod \"csi-node-driver-9vnfp\" (UID: \"1c2067e6-df38-44d0-9df8-192be51b26fc\") " pod="calico-system/csi-node-driver-9vnfp" Nov 1 01:01:59.197200 kubelet[3094]: E1101 01:01:59.197148 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.197200 kubelet[3094]: W1101 01:01:59.197192 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.197504 kubelet[3094]: E1101 01:01:59.197273 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.197821 kubelet[3094]: E1101 01:01:59.197787 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.197938 kubelet[3094]: W1101 01:01:59.197827 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.197938 kubelet[3094]: E1101 01:01:59.197863 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.198465 kubelet[3094]: E1101 01:01:59.198406 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.198465 kubelet[3094]: W1101 01:01:59.198433 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.198672 kubelet[3094]: E1101 01:01:59.198466 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.199007 kubelet[3094]: E1101 01:01:59.198972 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.199007 kubelet[3094]: W1101 01:01:59.199000 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.199339 kubelet[3094]: E1101 01:01:59.199038 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.199697 kubelet[3094]: E1101 01:01:59.199645 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.199697 kubelet[3094]: W1101 01:01:59.199672 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.199940 kubelet[3094]: E1101 01:01:59.199697 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.199940 kubelet[3094]: I1101 01:01:59.199764 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c2067e6-df38-44d0-9df8-192be51b26fc-kubelet-dir\") pod \"csi-node-driver-9vnfp\" (UID: \"1c2067e6-df38-44d0-9df8-192be51b26fc\") " pod="calico-system/csi-node-driver-9vnfp" Nov 1 01:01:59.200549 kubelet[3094]: E1101 01:01:59.200495 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.200785 kubelet[3094]: W1101 01:01:59.200550 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.200785 kubelet[3094]: E1101 01:01:59.200604 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.201341 kubelet[3094]: E1101 01:01:59.201297 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.201341 kubelet[3094]: W1101 01:01:59.201330 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.201693 kubelet[3094]: E1101 01:01:59.201360 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.257088 containerd[1834]: time="2025-11-01T01:01:59.257057329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vg458,Uid:fd4ea53a-29b6-463c-abe1-9384d539e061,Namespace:calico-system,Attempt:0,}" Nov 1 01:01:59.266349 containerd[1834]: time="2025-11-01T01:01:59.266251295Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:01:59.266551 containerd[1834]: time="2025-11-01T01:01:59.266292607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:01:59.266551 containerd[1834]: time="2025-11-01T01:01:59.266491435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:59.266551 containerd[1834]: time="2025-11-01T01:01:59.266535921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:01:59.288341 systemd[1]: Started cri-containerd-c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00.scope - libcontainer container c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00. 
Nov 1 01:01:59.299003 containerd[1834]: time="2025-11-01T01:01:59.298955239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vg458,Uid:fd4ea53a-29b6-463c-abe1-9384d539e061,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00\"" Nov 1 01:01:59.300885 kubelet[3094]: E1101 01:01:59.300873 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.300885 kubelet[3094]: W1101 01:01:59.300884 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.300943 kubelet[3094]: E1101 01:01:59.300896 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.301022 kubelet[3094]: E1101 01:01:59.301015 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301022 kubelet[3094]: W1101 01:01:59.301021 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301072 kubelet[3094]: E1101 01:01:59.301027 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.301176 kubelet[3094]: E1101 01:01:59.301168 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301204 kubelet[3094]: W1101 01:01:59.301177 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301204 kubelet[3094]: E1101 01:01:59.301185 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.301301 kubelet[3094]: E1101 01:01:59.301295 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301330 kubelet[3094]: W1101 01:01:59.301302 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301330 kubelet[3094]: E1101 01:01:59.301308 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.301439 kubelet[3094]: E1101 01:01:59.301433 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301464 kubelet[3094]: W1101 01:01:59.301439 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301464 kubelet[3094]: E1101 01:01:59.301445 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.301561 kubelet[3094]: E1101 01:01:59.301554 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301585 kubelet[3094]: W1101 01:01:59.301562 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301585 kubelet[3094]: E1101 01:01:59.301568 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.301685 kubelet[3094]: E1101 01:01:59.301678 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301708 kubelet[3094]: W1101 01:01:59.301685 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301708 kubelet[3094]: E1101 01:01:59.301691 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.301788 kubelet[3094]: E1101 01:01:59.301782 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301816 kubelet[3094]: W1101 01:01:59.301789 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301816 kubelet[3094]: E1101 01:01:59.301797 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.301902 kubelet[3094]: E1101 01:01:59.301895 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.301932 kubelet[3094]: W1101 01:01:59.301903 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.301932 kubelet[3094]: E1101 01:01:59.301911 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.302078 kubelet[3094]: E1101 01:01:59.302069 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302101 kubelet[3094]: W1101 01:01:59.302079 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.302101 kubelet[3094]: E1101 01:01:59.302086 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.302196 kubelet[3094]: E1101 01:01:59.302190 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302226 kubelet[3094]: W1101 01:01:59.302198 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.302226 kubelet[3094]: E1101 01:01:59.302204 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.302335 kubelet[3094]: E1101 01:01:59.302329 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302360 kubelet[3094]: W1101 01:01:59.302337 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.302360 kubelet[3094]: E1101 01:01:59.302343 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.302463 kubelet[3094]: E1101 01:01:59.302457 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302487 kubelet[3094]: W1101 01:01:59.302462 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.302487 kubelet[3094]: E1101 01:01:59.302468 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.302568 kubelet[3094]: E1101 01:01:59.302561 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302591 kubelet[3094]: W1101 01:01:59.302568 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.302591 kubelet[3094]: E1101 01:01:59.302574 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.302692 kubelet[3094]: E1101 01:01:59.302686 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302716 kubelet[3094]: W1101 01:01:59.302692 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.302716 kubelet[3094]: E1101 01:01:59.302698 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.302846 kubelet[3094]: E1101 01:01:59.302839 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302873 kubelet[3094]: W1101 01:01:59.302847 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.302873 kubelet[3094]: E1101 01:01:59.302856 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.302969 kubelet[3094]: E1101 01:01:59.302963 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.302969 kubelet[3094]: W1101 01:01:59.302969 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303014 kubelet[3094]: E1101 01:01:59.302975 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.303120 kubelet[3094]: E1101 01:01:59.303110 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.303141 kubelet[3094]: W1101 01:01:59.303121 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303141 kubelet[3094]: E1101 01:01:59.303130 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.303241 kubelet[3094]: E1101 01:01:59.303233 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.303241 kubelet[3094]: W1101 01:01:59.303239 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303289 kubelet[3094]: E1101 01:01:59.303245 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.303341 kubelet[3094]: E1101 01:01:59.303335 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.303363 kubelet[3094]: W1101 01:01:59.303341 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303363 kubelet[3094]: E1101 01:01:59.303346 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.303459 kubelet[3094]: E1101 01:01:59.303454 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.303481 kubelet[3094]: W1101 01:01:59.303459 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303481 kubelet[3094]: E1101 01:01:59.303465 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.303659 kubelet[3094]: E1101 01:01:59.303650 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.303683 kubelet[3094]: W1101 01:01:59.303661 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303683 kubelet[3094]: E1101 01:01:59.303669 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.303772 kubelet[3094]: E1101 01:01:59.303766 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.303793 kubelet[3094]: W1101 01:01:59.303772 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303793 kubelet[3094]: E1101 01:01:59.303778 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.303878 kubelet[3094]: E1101 01:01:59.303873 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.303900 kubelet[3094]: W1101 01:01:59.303880 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.303900 kubelet[3094]: E1101 01:01:59.303889 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:01:59.304078 kubelet[3094]: E1101 01:01:59.304072 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.304078 kubelet[3094]: W1101 01:01:59.304078 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.304131 kubelet[3094]: E1101 01:01:59.304084 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:01:59.308715 kubelet[3094]: E1101 01:01:59.308676 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:01:59.308715 kubelet[3094]: W1101 01:01:59.308686 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:01:59.308715 kubelet[3094]: E1101 01:01:59.308695 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:02:00.526332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515857011.mount: Deactivated successfully. 
Nov 1 01:02:00.768510 kubelet[3094]: E1101 01:02:00.768489 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:00.852091 containerd[1834]: time="2025-11-01T01:02:00.852018806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:00.852312 containerd[1834]: time="2025-11-01T01:02:00.852211362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 01:02:00.852593 containerd[1834]: time="2025-11-01T01:02:00.852581417Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:00.853603 containerd[1834]: time="2025-11-01T01:02:00.853559996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:00.854019 containerd[1834]: time="2025-11-01T01:02:00.853981750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.730339331s" Nov 1 01:02:00.854019 containerd[1834]: time="2025-11-01T01:02:00.853998487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 01:02:00.854450 containerd[1834]: time="2025-11-01T01:02:00.854439435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:02:00.858093 containerd[1834]: time="2025-11-01T01:02:00.858076788Z" level=info msg="CreateContainer within sandbox \"39af1f1f945db0079edc3ebecd97f38c28692ef5a9a04e46158b0906898bdf4f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 01:02:00.862319 containerd[1834]: time="2025-11-01T01:02:00.862272234Z" level=info msg="CreateContainer within sandbox \"39af1f1f945db0079edc3ebecd97f38c28692ef5a9a04e46158b0906898bdf4f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b8c018776c7bd30f5ad5ae7e19c293b8d4c9b6f4797bdf3e81985831f19cea7f\"" Nov 1 01:02:00.862490 containerd[1834]: time="2025-11-01T01:02:00.862477104Z" level=info msg="StartContainer for \"b8c018776c7bd30f5ad5ae7e19c293b8d4c9b6f4797bdf3e81985831f19cea7f\"" Nov 1 01:02:00.881406 systemd[1]: Started cri-containerd-b8c018776c7bd30f5ad5ae7e19c293b8d4c9b6f4797bdf3e81985831f19cea7f.scope - libcontainer container b8c018776c7bd30f5ad5ae7e19c293b8d4c9b6f4797bdf3e81985831f19cea7f. 
Nov 1 01:02:00.908847 containerd[1834]: time="2025-11-01T01:02:00.908824209Z" level=info msg="StartContainer for \"b8c018776c7bd30f5ad5ae7e19c293b8d4c9b6f4797bdf3e81985831f19cea7f\" returns successfully" Nov 1 01:02:01.857945 kubelet[3094]: I1101 01:02:01.857830 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-94bd4856b-pbmq5" podStartSLOduration=2.126906808 podStartE2EDuration="3.857792057s" podCreationTimestamp="2025-11-01 01:01:58 +0000 UTC" firstStartedPulling="2025-11-01 01:01:59.123475278 +0000 UTC m=+18.403612055" lastFinishedPulling="2025-11-01 01:02:00.854360525 +0000 UTC m=+20.134497304" observedRunningTime="2025-11-01 01:02:01.857136251 +0000 UTC m=+21.137273130" watchObservedRunningTime="2025-11-01 01:02:01.857792057 +0000 UTC m=+21.137928885" Nov 1 01:02:01.906661 kubelet[3094]: E1101 01:02:01.906627 3094 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:02:01.906661 kubelet[3094]: W1101 01:02:01.906638 3094 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:02:01.906661 kubelet[3094]: E1101 01:02:01.906649 3094 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:02:02.165816 containerd[1834]: time="2025-11-01T01:02:02.165751448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:02.166073 containerd[1834]: time="2025-11-01T01:02:02.165947762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 01:02:02.166357 containerd[1834]: time="2025-11-01T01:02:02.166317908Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:02.167470 containerd[1834]: time="2025-11-01T01:02:02.167428766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:02.168607 containerd[1834]: time="2025-11-01T01:02:02.168561691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.314105078s" Nov 1 01:02:02.168607 containerd[1834]: time="2025-11-01T01:02:02.168581154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:02:02.170166 containerd[1834]: time="2025-11-01T01:02:02.170152859Z" level=info msg="CreateContainer within sandbox \"c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 01:02:02.174980 containerd[1834]: time="2025-11-01T01:02:02.174937632Z" level=info msg="CreateContainer within sandbox \"c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc\"" Nov 1 01:02:02.176030 containerd[1834]: time="2025-11-01T01:02:02.175385144Z" level=info msg="StartContainer for \"12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc\"" Nov 1 01:02:02.201539 systemd[1]: Started cri-containerd-12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc.scope - libcontainer container 12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc. Nov 1 01:02:02.215349 containerd[1834]: time="2025-11-01T01:02:02.215295108Z" level=info msg="StartContainer for \"12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc\" returns successfully" Nov 1 01:02:02.221290 systemd[1]: cri-containerd-12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc.scope: Deactivated successfully. Nov 1 01:02:02.235234 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc-rootfs.mount: Deactivated successfully. 
Nov 1 01:02:02.665557 containerd[1834]: time="2025-11-01T01:02:02.665480029Z" level=info msg="shim disconnected" id=12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc namespace=k8s.io Nov 1 01:02:02.665557 containerd[1834]: time="2025-11-01T01:02:02.665516937Z" level=warning msg="cleaning up after shim disconnected" id=12b8f7b4017580b06a13f6c4f739846d30e74e818345f1e354e8b27b9cbc90fc namespace=k8s.io Nov 1 01:02:02.665557 containerd[1834]: time="2025-11-01T01:02:02.665523061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:02:02.769272 kubelet[3094]: E1101 01:02:02.769148 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:02.845981 containerd[1834]: time="2025-11-01T01:02:02.845874619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 01:02:04.768955 kubelet[3094]: E1101 01:02:04.768931 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:05.002986 containerd[1834]: time="2025-11-01T01:02:05.002936646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:05.003201 containerd[1834]: time="2025-11-01T01:02:05.003181791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 01:02:05.003571 containerd[1834]: time="2025-11-01T01:02:05.003542055Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:05.004563 containerd[1834]: time="2025-11-01T01:02:05.004524016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:05.004970 containerd[1834]: time="2025-11-01T01:02:05.004929861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.158963229s" Nov 1 01:02:05.004970 containerd[1834]: time="2025-11-01T01:02:05.004944181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 01:02:05.006340 containerd[1834]: time="2025-11-01T01:02:05.006328430Z" level=info msg="CreateContainer within sandbox \"c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 01:02:05.011836 containerd[1834]: time="2025-11-01T01:02:05.011790121Z" level=info msg="CreateContainer within sandbox \"c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae\"" Nov 1 01:02:05.012011 containerd[1834]: time="2025-11-01T01:02:05.011997291Z" level=info msg="StartContainer for \"8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae\"" Nov 1 01:02:05.050702 systemd[1]: Started 
cri-containerd-8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae.scope - libcontainer container 8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae. Nov 1 01:02:05.124590 containerd[1834]: time="2025-11-01T01:02:05.124524357Z" level=info msg="StartContainer for \"8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae\" returns successfully" Nov 1 01:02:05.769538 containerd[1834]: time="2025-11-01T01:02:05.769402682Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 01:02:05.773981 systemd[1]: cri-containerd-8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae.scope: Deactivated successfully. Nov 1 01:02:05.821700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae-rootfs.mount: Deactivated successfully. Nov 1 01:02:05.853421 kubelet[3094]: I1101 01:02:05.853362 3094 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 01:02:05.994979 systemd[1]: Created slice kubepods-burstable-pod4655cfaf_6ee0_4366_982d_ab89e39053ab.slice - libcontainer container kubepods-burstable-pod4655cfaf_6ee0_4366_982d_ab89e39053ab.slice. Nov 1 01:02:06.029196 systemd[1]: Created slice kubepods-besteffort-pod66631db9_6f47_4e8c_8fde_e00b56c3ece6.slice - libcontainer container kubepods-besteffort-pod66631db9_6f47_4e8c_8fde_e00b56c3ece6.slice. 
Nov 1 01:02:06.050563 kubelet[3094]: I1101 01:02:06.050481 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j4bq\" (UniqueName: \"kubernetes.io/projected/4655cfaf-6ee0-4366-982d-ab89e39053ab-kube-api-access-6j4bq\") pod \"coredns-674b8bbfcf-mzcbx\" (UID: \"4655cfaf-6ee0-4366-982d-ab89e39053ab\") " pod="kube-system/coredns-674b8bbfcf-mzcbx" Nov 1 01:02:06.051081 kubelet[3094]: I1101 01:02:06.050593 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66631db9-6f47-4e8c-8fde-e00b56c3ece6-tigera-ca-bundle\") pod \"calico-kube-controllers-59cbdf9dd7-589d7\" (UID: \"66631db9-6f47-4e8c-8fde-e00b56c3ece6\") " pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" Nov 1 01:02:06.051081 kubelet[3094]: I1101 01:02:06.050653 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvl4\" (UniqueName: \"kubernetes.io/projected/66631db9-6f47-4e8c-8fde-e00b56c3ece6-kube-api-access-8pvl4\") pod \"calico-kube-controllers-59cbdf9dd7-589d7\" (UID: \"66631db9-6f47-4e8c-8fde-e00b56c3ece6\") " pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" Nov 1 01:02:06.051081 kubelet[3094]: I1101 01:02:06.050725 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4655cfaf-6ee0-4366-982d-ab89e39053ab-config-volume\") pod \"coredns-674b8bbfcf-mzcbx\" (UID: \"4655cfaf-6ee0-4366-982d-ab89e39053ab\") " pod="kube-system/coredns-674b8bbfcf-mzcbx" Nov 1 01:02:06.063875 systemd[1]: Created slice kubepods-burstable-pod204dfdb6_7331_4950_b1cc_50b35a251b49.slice - libcontainer container kubepods-burstable-pod204dfdb6_7331_4950_b1cc_50b35a251b49.slice. 
Nov 1 01:02:06.137691 systemd[1]: Created slice kubepods-besteffort-pod61ef21f2_b413_4ed0_8572_35cbc407e679.slice - libcontainer container kubepods-besteffort-pod61ef21f2_b413_4ed0_8572_35cbc407e679.slice. Nov 1 01:02:06.152017 kubelet[3094]: I1101 01:02:06.151903 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fml8\" (UniqueName: \"kubernetes.io/projected/204dfdb6-7331-4950-b1cc-50b35a251b49-kube-api-access-7fml8\") pod \"coredns-674b8bbfcf-b6jr8\" (UID: \"204dfdb6-7331-4950-b1cc-50b35a251b49\") " pod="kube-system/coredns-674b8bbfcf-b6jr8" Nov 1 01:02:06.194072 kubelet[3094]: I1101 01:02:06.152052 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61ef21f2-b413-4ed0-8572-35cbc407e679-calico-apiserver-certs\") pod \"calico-apiserver-65b64f6597-jkbn2\" (UID: \"61ef21f2-b413-4ed0-8572-35cbc407e679\") " pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" Nov 1 01:02:06.194072 kubelet[3094]: I1101 01:02:06.152113 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9628\" (UniqueName: \"kubernetes.io/projected/61ef21f2-b413-4ed0-8572-35cbc407e679-kube-api-access-n9628\") pod \"calico-apiserver-65b64f6597-jkbn2\" (UID: \"61ef21f2-b413-4ed0-8572-35cbc407e679\") " pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" Nov 1 01:02:06.194072 kubelet[3094]: I1101 01:02:06.152429 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/204dfdb6-7331-4950-b1cc-50b35a251b49-config-volume\") pod \"coredns-674b8bbfcf-b6jr8\" (UID: \"204dfdb6-7331-4950-b1cc-50b35a251b49\") " pod="kube-system/coredns-674b8bbfcf-b6jr8" Nov 1 01:02:06.219986 containerd[1834]: time="2025-11-01T01:02:06.219943194Z" level=info msg="shim disconnected" 
id=8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae namespace=k8s.io Nov 1 01:02:06.219986 containerd[1834]: time="2025-11-01T01:02:06.219985467Z" level=warning msg="cleaning up after shim disconnected" id=8de41b4fdfc9fc51eb5aa1345c7e454bcec66857f9e1346e4d37e9af280b1aae namespace=k8s.io Nov 1 01:02:06.220196 containerd[1834]: time="2025-11-01T01:02:06.219995645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:02:06.222379 systemd[1]: Created slice kubepods-besteffort-podf10f5e42_eb9a_47ab_8781_8e9dfee85efa.slice - libcontainer container kubepods-besteffort-podf10f5e42_eb9a_47ab_8781_8e9dfee85efa.slice. Nov 1 01:02:06.224856 systemd[1]: Created slice kubepods-besteffort-pod2848d47a_0e6d_4163_bcbd_cf745e94e4c6.slice - libcontainer container kubepods-besteffort-pod2848d47a_0e6d_4163_bcbd_cf745e94e4c6.slice. Nov 1 01:02:06.227521 systemd[1]: Created slice kubepods-besteffort-pod91a86707_3121_4bc7_8e17_521b669e030d.slice - libcontainer container kubepods-besteffort-pod91a86707_3121_4bc7_8e17_521b669e030d.slice. 
Nov 1 01:02:06.252682 kubelet[3094]: I1101 01:02:06.252623 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f10f5e42-eb9a-47ab-8781-8e9dfee85efa-config\") pod \"goldmane-666569f655-4gtnc\" (UID: \"f10f5e42-eb9a-47ab-8781-8e9dfee85efa\") " pod="calico-system/goldmane-666569f655-4gtnc" Nov 1 01:02:06.252682 kubelet[3094]: I1101 01:02:06.252654 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91a86707-3121-4bc7-8e17-521b669e030d-whisker-backend-key-pair\") pod \"whisker-7596bf7846-q9gz4\" (UID: \"91a86707-3121-4bc7-8e17-521b669e030d\") " pod="calico-system/whisker-7596bf7846-q9gz4" Nov 1 01:02:06.252682 kubelet[3094]: I1101 01:02:06.252672 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg5wk\" (UniqueName: \"kubernetes.io/projected/91a86707-3121-4bc7-8e17-521b669e030d-kube-api-access-jg5wk\") pod \"whisker-7596bf7846-q9gz4\" (UID: \"91a86707-3121-4bc7-8e17-521b669e030d\") " pod="calico-system/whisker-7596bf7846-q9gz4" Nov 1 01:02:06.252682 kubelet[3094]: I1101 01:02:06.252686 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfxv6\" (UniqueName: \"kubernetes.io/projected/2848d47a-0e6d-4163-bcbd-cf745e94e4c6-kube-api-access-bfxv6\") pod \"calico-apiserver-65b64f6597-6gf7n\" (UID: \"2848d47a-0e6d-4163-bcbd-cf745e94e4c6\") " pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" Nov 1 01:02:06.252868 kubelet[3094]: I1101 01:02:06.252715 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f10f5e42-eb9a-47ab-8781-8e9dfee85efa-goldmane-ca-bundle\") pod \"goldmane-666569f655-4gtnc\" (UID: 
\"f10f5e42-eb9a-47ab-8781-8e9dfee85efa\") " pod="calico-system/goldmane-666569f655-4gtnc" Nov 1 01:02:06.252868 kubelet[3094]: I1101 01:02:06.252730 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f10f5e42-eb9a-47ab-8781-8e9dfee85efa-goldmane-key-pair\") pod \"goldmane-666569f655-4gtnc\" (UID: \"f10f5e42-eb9a-47ab-8781-8e9dfee85efa\") " pod="calico-system/goldmane-666569f655-4gtnc" Nov 1 01:02:06.252868 kubelet[3094]: I1101 01:02:06.252744 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkcv\" (UniqueName: \"kubernetes.io/projected/f10f5e42-eb9a-47ab-8781-8e9dfee85efa-kube-api-access-cqkcv\") pod \"goldmane-666569f655-4gtnc\" (UID: \"f10f5e42-eb9a-47ab-8781-8e9dfee85efa\") " pod="calico-system/goldmane-666569f655-4gtnc" Nov 1 01:02:06.252868 kubelet[3094]: I1101 01:02:06.252760 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91a86707-3121-4bc7-8e17-521b669e030d-whisker-ca-bundle\") pod \"whisker-7596bf7846-q9gz4\" (UID: \"91a86707-3121-4bc7-8e17-521b669e030d\") " pod="calico-system/whisker-7596bf7846-q9gz4" Nov 1 01:02:06.252868 kubelet[3094]: I1101 01:02:06.252776 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2848d47a-0e6d-4163-bcbd-cf745e94e4c6-calico-apiserver-certs\") pod \"calico-apiserver-65b64f6597-6gf7n\" (UID: \"2848d47a-0e6d-4163-bcbd-cf745e94e4c6\") " pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" Nov 1 01:02:06.312292 containerd[1834]: time="2025-11-01T01:02:06.312091537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mzcbx,Uid:4655cfaf-6ee0-4366-982d-ab89e39053ab,Namespace:kube-system,Attempt:0,}" Nov 1 
01:02:06.331606 containerd[1834]: time="2025-11-01T01:02:06.331558318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cbdf9dd7-589d7,Uid:66631db9-6f47-4e8c-8fde-e00b56c3ece6,Namespace:calico-system,Attempt:0,}" Nov 1 01:02:06.341444 containerd[1834]: time="2025-11-01T01:02:06.341414103Z" level=error msg="Failed to destroy network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.341635 containerd[1834]: time="2025-11-01T01:02:06.341620250Z" level=error msg="encountered an error cleaning up failed sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.341675 containerd[1834]: time="2025-11-01T01:02:06.341653986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mzcbx,Uid:4655cfaf-6ee0-4366-982d-ab89e39053ab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.341840 kubelet[3094]: E1101 01:02:06.341788 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.341875 kubelet[3094]: E1101 01:02:06.341838 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mzcbx" Nov 1 01:02:06.341875 kubelet[3094]: E1101 01:02:06.341852 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mzcbx" Nov 1 01:02:06.341917 kubelet[3094]: E1101 01:02:06.341883 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mzcbx_kube-system(4655cfaf-6ee0-4366-982d-ab89e39053ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mzcbx_kube-system(4655cfaf-6ee0-4366-982d-ab89e39053ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mzcbx" podUID="4655cfaf-6ee0-4366-982d-ab89e39053ab" Nov 1 01:02:06.357264 containerd[1834]: time="2025-11-01T01:02:06.357234191Z" level=error msg="Failed to destroy network for sandbox 
\"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.357439 containerd[1834]: time="2025-11-01T01:02:06.357423625Z" level=error msg="encountered an error cleaning up failed sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.357484 containerd[1834]: time="2025-11-01T01:02:06.357461881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cbdf9dd7-589d7,Uid:66631db9-6f47-4e8c-8fde-e00b56c3ece6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.357582 kubelet[3094]: E1101 01:02:06.357564 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.357614 kubelet[3094]: E1101 01:02:06.357594 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" Nov 1 01:02:06.357614 kubelet[3094]: E1101 01:02:06.357607 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" Nov 1 01:02:06.357673 kubelet[3094]: E1101 01:02:06.357657 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:06.386737 containerd[1834]: time="2025-11-01T01:02:06.386607999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6jr8,Uid:204dfdb6-7331-4950-b1cc-50b35a251b49,Namespace:kube-system,Attempt:0,}" Nov 1 01:02:06.414609 containerd[1834]: time="2025-11-01T01:02:06.414552938Z" level=error msg="Failed to destroy network for sandbox 
\"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.414774 containerd[1834]: time="2025-11-01T01:02:06.414734493Z" level=error msg="encountered an error cleaning up failed sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.414774 containerd[1834]: time="2025-11-01T01:02:06.414764327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6jr8,Uid:204dfdb6-7331-4950-b1cc-50b35a251b49,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.414949 kubelet[3094]: E1101 01:02:06.414925 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.414981 kubelet[3094]: E1101 01:02:06.414968 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b6jr8" Nov 1 01:02:06.415010 kubelet[3094]: E1101 01:02:06.414983 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b6jr8" Nov 1 01:02:06.415042 kubelet[3094]: E1101 01:02:06.415026 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b6jr8_kube-system(204dfdb6-7331-4950-b1cc-50b35a251b49)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b6jr8_kube-system(204dfdb6-7331-4950-b1cc-50b35a251b49)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b6jr8" podUID="204dfdb6-7331-4950-b1cc-50b35a251b49" Nov 1 01:02:06.444553 containerd[1834]: time="2025-11-01T01:02:06.444428955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-jkbn2,Uid:61ef21f2-b413-4ed0-8572-35cbc407e679,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:02:06.472183 containerd[1834]: time="2025-11-01T01:02:06.472152528Z" level=error msg="Failed to destroy network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.472372 containerd[1834]: time="2025-11-01T01:02:06.472335069Z" level=error msg="encountered an error cleaning up failed sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.472372 containerd[1834]: time="2025-11-01T01:02:06.472362850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-jkbn2,Uid:61ef21f2-b413-4ed0-8572-35cbc407e679,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.472524 kubelet[3094]: E1101 01:02:06.472478 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.472524 kubelet[3094]: E1101 01:02:06.472516 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" Nov 1 01:02:06.472572 kubelet[3094]: E1101 01:02:06.472530 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" Nov 1 01:02:06.472595 kubelet[3094]: E1101 01:02:06.472566 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:02:06.525839 containerd[1834]: time="2025-11-01T01:02:06.525719641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4gtnc,Uid:f10f5e42-eb9a-47ab-8781-8e9dfee85efa,Namespace:calico-system,Attempt:0,}" Nov 1 01:02:06.527241 containerd[1834]: time="2025-11-01T01:02:06.527196201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-6gf7n,Uid:2848d47a-0e6d-4163-bcbd-cf745e94e4c6,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:02:06.529559 containerd[1834]: time="2025-11-01T01:02:06.529512914Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596bf7846-q9gz4,Uid:91a86707-3121-4bc7-8e17-521b669e030d,Namespace:calico-system,Attempt:0,}" Nov 1 01:02:06.555643 containerd[1834]: time="2025-11-01T01:02:06.555608849Z" level=error msg="Failed to destroy network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.555863 containerd[1834]: time="2025-11-01T01:02:06.555844224Z" level=error msg="encountered an error cleaning up failed sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.555900 containerd[1834]: time="2025-11-01T01:02:06.555885594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4gtnc,Uid:f10f5e42-eb9a-47ab-8781-8e9dfee85efa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.556027 kubelet[3094]: E1101 01:02:06.556006 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.556057 
kubelet[3094]: E1101 01:02:06.556044 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4gtnc" Nov 1 01:02:06.556078 kubelet[3094]: E1101 01:02:06.556059 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4gtnc" Nov 1 01:02:06.556105 kubelet[3094]: E1101 01:02:06.556090 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:06.556151 containerd[1834]: time="2025-11-01T01:02:06.556137380Z" level=error msg="Failed to destroy network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.556298 containerd[1834]: time="2025-11-01T01:02:06.556286499Z" level=error msg="encountered an error cleaning up failed sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.556325 containerd[1834]: time="2025-11-01T01:02:06.556313468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-6gf7n,Uid:2848d47a-0e6d-4163-bcbd-cf745e94e4c6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.556396 kubelet[3094]: E1101 01:02:06.556383 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.556418 kubelet[3094]: E1101 01:02:06.556407 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" Nov 1 01:02:06.556436 kubelet[3094]: E1101 01:02:06.556418 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" Nov 1 01:02:06.556455 kubelet[3094]: E1101 01:02:06.556443 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:02:06.557205 containerd[1834]: time="2025-11-01T01:02:06.557191651Z" level=error msg="Failed to destroy network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.557417 containerd[1834]: time="2025-11-01T01:02:06.557380411Z" level=error msg="encountered an error cleaning up failed sandbox 
\"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.557417 containerd[1834]: time="2025-11-01T01:02:06.557402232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596bf7846-q9gz4,Uid:91a86707-3121-4bc7-8e17-521b669e030d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.557474 kubelet[3094]: E1101 01:02:06.557460 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.557497 kubelet[3094]: E1101 01:02:06.557482 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7596bf7846-q9gz4" Nov 1 01:02:06.557519 kubelet[3094]: E1101 01:02:06.557493 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7596bf7846-q9gz4" Nov 1 01:02:06.557539 kubelet[3094]: E1101 01:02:06.557516 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7596bf7846-q9gz4_calico-system(91a86707-3121-4bc7-8e17-521b669e030d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7596bf7846-q9gz4_calico-system(91a86707-3121-4bc7-8e17-521b669e030d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7596bf7846-q9gz4" podUID="91a86707-3121-4bc7-8e17-521b669e030d" Nov 1 01:02:06.784712 systemd[1]: Created slice kubepods-besteffort-pod1c2067e6_df38_44d0_9df8_192be51b26fc.slice - libcontainer container kubepods-besteffort-pod1c2067e6_df38_44d0_9df8_192be51b26fc.slice. 
Nov 1 01:02:06.790244 containerd[1834]: time="2025-11-01T01:02:06.790097627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vnfp,Uid:1c2067e6-df38-44d0-9df8-192be51b26fc,Namespace:calico-system,Attempt:0,}" Nov 1 01:02:06.824554 containerd[1834]: time="2025-11-01T01:02:06.824500786Z" level=error msg="Failed to destroy network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.824708 containerd[1834]: time="2025-11-01T01:02:06.824665883Z" level=error msg="encountered an error cleaning up failed sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.824708 containerd[1834]: time="2025-11-01T01:02:06.824695871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vnfp,Uid:1c2067e6-df38-44d0-9df8-192be51b26fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.824881 kubelet[3094]: E1101 01:02:06.824828 3094 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.824881 kubelet[3094]: E1101 01:02:06.824862 3094 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vnfp" Nov 1 01:02:06.824881 kubelet[3094]: E1101 01:02:06.824875 3094 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9vnfp" Nov 1 01:02:06.824960 kubelet[3094]: E1101 01:02:06.824908 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:06.857433 kubelet[3094]: I1101 01:02:06.857379 3094 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:06.857824 containerd[1834]: time="2025-11-01T01:02:06.857786096Z" level=info msg="StopPodSandbox for \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\"" Nov 1 01:02:06.857985 kubelet[3094]: I1101 01:02:06.857933 3094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:06.858038 containerd[1834]: time="2025-11-01T01:02:06.857972350Z" level=info msg="Ensure that sandbox 9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a in task-service has been cleanup successfully" Nov 1 01:02:06.858253 containerd[1834]: time="2025-11-01T01:02:06.858236918Z" level=info msg="StopPodSandbox for \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\"" Nov 1 01:02:06.858418 containerd[1834]: time="2025-11-01T01:02:06.858404246Z" level=info msg="Ensure that sandbox 14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a in task-service has been cleanup successfully" Nov 1 01:02:06.858508 kubelet[3094]: I1101 01:02:06.858498 3094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:06.858879 containerd[1834]: time="2025-11-01T01:02:06.858859382Z" level=info msg="StopPodSandbox for \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\"" Nov 1 01:02:06.859249 containerd[1834]: time="2025-11-01T01:02:06.859213236Z" level=info msg="Ensure that sandbox 8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5 in task-service has been cleanup successfully" Nov 1 01:02:06.859503 kubelet[3094]: I1101 01:02:06.859483 3094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:06.860059 containerd[1834]: 
time="2025-11-01T01:02:06.860035254Z" level=info msg="StopPodSandbox for \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\"" Nov 1 01:02:06.860349 containerd[1834]: time="2025-11-01T01:02:06.860326148Z" level=info msg="Ensure that sandbox 7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8 in task-service has been cleanup successfully" Nov 1 01:02:06.861374 kubelet[3094]: I1101 01:02:06.861349 3094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:06.861885 containerd[1834]: time="2025-11-01T01:02:06.861854977Z" level=info msg="StopPodSandbox for \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\"" Nov 1 01:02:06.862075 containerd[1834]: time="2025-11-01T01:02:06.862055204Z" level=info msg="Ensure that sandbox 425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26 in task-service has been cleanup successfully" Nov 1 01:02:06.862138 kubelet[3094]: I1101 01:02:06.862122 3094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:06.862567 containerd[1834]: time="2025-11-01T01:02:06.862539038Z" level=info msg="StopPodSandbox for \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\"" Nov 1 01:02:06.862852 containerd[1834]: time="2025-11-01T01:02:06.862705230Z" level=info msg="Ensure that sandbox 2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7 in task-service has been cleanup successfully" Nov 1 01:02:06.862926 kubelet[3094]: I1101 01:02:06.862911 3094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:06.863581 containerd[1834]: time="2025-11-01T01:02:06.863542100Z" level=info msg="StopPodSandbox for 
\"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\"" Nov 1 01:02:06.863808 containerd[1834]: time="2025-11-01T01:02:06.863784814Z" level=info msg="Ensure that sandbox 122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01 in task-service has been cleanup successfully" Nov 1 01:02:06.865851 containerd[1834]: time="2025-11-01T01:02:06.865810901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 01:02:06.866018 kubelet[3094]: I1101 01:02:06.865942 3094 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:06.866454 containerd[1834]: time="2025-11-01T01:02:06.866430032Z" level=info msg="StopPodSandbox for \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\"" Nov 1 01:02:06.866620 containerd[1834]: time="2025-11-01T01:02:06.866600606Z" level=info msg="Ensure that sandbox 4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6 in task-service has been cleanup successfully" Nov 1 01:02:06.880808 containerd[1834]: time="2025-11-01T01:02:06.880773023Z" level=error msg="StopPodSandbox for \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\" failed" error="failed to destroy network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.880969 containerd[1834]: time="2025-11-01T01:02:06.880936643Z" level=error msg="StopPodSandbox for \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\" failed" error="failed to destroy network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 01:02:06.881043 kubelet[3094]: E1101 01:02:06.880949 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:06.881043 kubelet[3094]: E1101 01:02:06.880994 3094 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a"} Nov 1 01:02:06.881153 kubelet[3094]: E1101 01:02:06.881057 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91a86707-3121-4bc7-8e17-521b669e030d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.881153 kubelet[3094]: E1101 01:02:06.881086 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91a86707-3121-4bc7-8e17-521b669e030d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7596bf7846-q9gz4" podUID="91a86707-3121-4bc7-8e17-521b669e030d" 
Nov 1 01:02:06.881153 kubelet[3094]: E1101 01:02:06.881049 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:06.881153 kubelet[3094]: E1101 01:02:06.881124 3094 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a"} Nov 1 01:02:06.881407 kubelet[3094]: E1101 01:02:06.881149 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2848d47a-0e6d-4163-bcbd-cf745e94e4c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.881407 kubelet[3094]: E1101 01:02:06.881171 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2848d47a-0e6d-4163-bcbd-cf745e94e4c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:02:06.882020 
containerd[1834]: time="2025-11-01T01:02:06.881987754Z" level=error msg="StopPodSandbox for \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\" failed" error="failed to destroy network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.882134 kubelet[3094]: E1101 01:02:06.882114 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:06.882188 kubelet[3094]: E1101 01:02:06.882140 3094 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5"} Nov 1 01:02:06.882188 kubelet[3094]: E1101 01:02:06.882166 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61ef21f2-b413-4ed0-8572-35cbc407e679\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.882304 kubelet[3094]: E1101 01:02:06.882181 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61ef21f2-b413-4ed0-8572-35cbc407e679\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:02:06.882603 containerd[1834]: time="2025-11-01T01:02:06.882570918Z" level=error msg="StopPodSandbox for \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\" failed" error="failed to destroy network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.882693 kubelet[3094]: E1101 01:02:06.882675 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:06.882748 kubelet[3094]: E1101 01:02:06.882697 3094 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8"} Nov 1 01:02:06.882748 kubelet[3094]: E1101 01:02:06.882719 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66631db9-6f47-4e8c-8fde-e00b56c3ece6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.882748 kubelet[3094]: E1101 01:02:06.882733 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66631db9-6f47-4e8c-8fde-e00b56c3ece6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:06.883453 containerd[1834]: time="2025-11-01T01:02:06.883433231Z" level=error msg="StopPodSandbox for \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\" failed" error="failed to destroy network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.883534 kubelet[3094]: E1101 01:02:06.883513 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:06.883568 kubelet[3094]: E1101 01:02:06.883538 3094 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26"} Nov 1 01:02:06.883568 kubelet[3094]: E1101 01:02:06.883556 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f10f5e42-eb9a-47ab-8781-8e9dfee85efa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.883631 kubelet[3094]: E1101 01:02:06.883571 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f10f5e42-eb9a-47ab-8781-8e9dfee85efa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:06.883810 containerd[1834]: time="2025-11-01T01:02:06.883781676Z" level=error msg="StopPodSandbox for \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\" failed" error="failed to destroy network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.883890 kubelet[3094]: E1101 01:02:06.883876 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:06.883924 kubelet[3094]: E1101 01:02:06.883895 3094 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7"} Nov 1 01:02:06.883924 kubelet[3094]: E1101 01:02:06.883915 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"204dfdb6-7331-4950-b1cc-50b35a251b49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.883984 kubelet[3094]: E1101 01:02:06.883928 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"204dfdb6-7331-4950-b1cc-50b35a251b49\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b6jr8" podUID="204dfdb6-7331-4950-b1cc-50b35a251b49" Nov 1 01:02:06.884683 containerd[1834]: time="2025-11-01T01:02:06.884641964Z" level=error msg="StopPodSandbox for \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\" 
failed" error="failed to destroy network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.884775 kubelet[3094]: E1101 01:02:06.884733 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:06.884775 kubelet[3094]: E1101 01:02:06.884754 3094 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01"} Nov 1 01:02:06.884775 kubelet[3094]: E1101 01:02:06.884770 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4655cfaf-6ee0-4366-982d-ab89e39053ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.884865 kubelet[3094]: E1101 01:02:06.884786 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4655cfaf-6ee0-4366-982d-ab89e39053ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mzcbx" podUID="4655cfaf-6ee0-4366-982d-ab89e39053ab" Nov 1 01:02:06.888018 containerd[1834]: time="2025-11-01T01:02:06.887962114Z" level=error msg="StopPodSandbox for \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\" failed" error="failed to destroy network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:02:06.888084 kubelet[3094]: E1101 01:02:06.888066 3094 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:06.888119 kubelet[3094]: E1101 01:02:06.888089 3094 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6"} Nov 1 01:02:06.888119 kubelet[3094]: E1101 01:02:06.888109 3094 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c2067e6-df38-44d0-9df8-192be51b26fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:02:06.888188 kubelet[3094]: E1101 01:02:06.888125 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c2067e6-df38-44d0-9df8-192be51b26fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:07.223195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01-shm.mount: Deactivated successfully. Nov 1 01:02:10.076802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637883989.mount: Deactivated successfully. 
Nov 1 01:02:10.093761 containerd[1834]: time="2025-11-01T01:02:10.093711887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:10.093973 containerd[1834]: time="2025-11-01T01:02:10.093922275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 01:02:10.094273 containerd[1834]: time="2025-11-01T01:02:10.094244534Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:10.095202 containerd[1834]: time="2025-11-01T01:02:10.095157607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:02:10.095554 containerd[1834]: time="2025-11-01T01:02:10.095512761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 3.229660368s" Nov 1 01:02:10.095554 containerd[1834]: time="2025-11-01T01:02:10.095527588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:02:10.099384 containerd[1834]: time="2025-11-01T01:02:10.099368808Z" level=info msg="CreateContainer within sandbox \"c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 01:02:10.104667 containerd[1834]: time="2025-11-01T01:02:10.104647236Z" level=info msg="CreateContainer 
within sandbox \"c3986b7fad64e0582f1975964dbcca2c5f85b13a40430f4851b31bdcbaad0c00\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d9452a9544a2746c86a9e6dd11c506e5d01d05ad6583eceb8d6228545373b760\"" Nov 1 01:02:10.104998 containerd[1834]: time="2025-11-01T01:02:10.104980709Z" level=info msg="StartContainer for \"d9452a9544a2746c86a9e6dd11c506e5d01d05ad6583eceb8d6228545373b760\"" Nov 1 01:02:10.132505 systemd[1]: Started cri-containerd-d9452a9544a2746c86a9e6dd11c506e5d01d05ad6583eceb8d6228545373b760.scope - libcontainer container d9452a9544a2746c86a9e6dd11c506e5d01d05ad6583eceb8d6228545373b760. Nov 1 01:02:10.150827 containerd[1834]: time="2025-11-01T01:02:10.150795464Z" level=info msg="StartContainer for \"d9452a9544a2746c86a9e6dd11c506e5d01d05ad6583eceb8d6228545373b760\" returns successfully" Nov 1 01:02:10.238396 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:02:10.238449 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 01:02:10.274754 containerd[1834]: time="2025-11-01T01:02:10.274727575Z" level=info msg="StopPodSandbox for \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\"" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.298 [INFO][4686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.299 [INFO][4686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" iface="eth0" netns="/var/run/netns/cni-4e7af391-8d35-a038-aebb-7edb9ee35571" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.299 [INFO][4686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" iface="eth0" netns="/var/run/netns/cni-4e7af391-8d35-a038-aebb-7edb9ee35571" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.299 [INFO][4686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" iface="eth0" netns="/var/run/netns/cni-4e7af391-8d35-a038-aebb-7edb9ee35571" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.299 [INFO][4686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.299 [INFO][4686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.309 [INFO][4714] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.309 [INFO][4714] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.309 [INFO][4714] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.312 [WARNING][4714] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.312 [INFO][4714] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.313 [INFO][4714] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:10.315802 containerd[1834]: 2025-11-01 01:02:10.314 [INFO][4686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:10.316143 containerd[1834]: time="2025-11-01T01:02:10.315892062Z" level=info msg="TearDown network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\" successfully" Nov 1 01:02:10.316143 containerd[1834]: time="2025-11-01T01:02:10.315917879Z" level=info msg="StopPodSandbox for \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\" returns successfully" Nov 1 01:02:10.380461 kubelet[3094]: I1101 01:02:10.380361 3094 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91a86707-3121-4bc7-8e17-521b669e030d-whisker-ca-bundle\") pod \"91a86707-3121-4bc7-8e17-521b669e030d\" (UID: \"91a86707-3121-4bc7-8e17-521b669e030d\") " Nov 1 01:02:10.380461 kubelet[3094]: I1101 01:02:10.380398 3094 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg5wk\" (UniqueName: 
\"kubernetes.io/projected/91a86707-3121-4bc7-8e17-521b669e030d-kube-api-access-jg5wk\") pod \"91a86707-3121-4bc7-8e17-521b669e030d\" (UID: \"91a86707-3121-4bc7-8e17-521b669e030d\") " Nov 1 01:02:10.380461 kubelet[3094]: I1101 01:02:10.380420 3094 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91a86707-3121-4bc7-8e17-521b669e030d-whisker-backend-key-pair\") pod \"91a86707-3121-4bc7-8e17-521b669e030d\" (UID: \"91a86707-3121-4bc7-8e17-521b669e030d\") " Nov 1 01:02:10.380782 kubelet[3094]: I1101 01:02:10.380721 3094 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91a86707-3121-4bc7-8e17-521b669e030d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "91a86707-3121-4bc7-8e17-521b669e030d" (UID: "91a86707-3121-4bc7-8e17-521b669e030d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:02:10.382727 kubelet[3094]: I1101 01:02:10.382666 3094 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a86707-3121-4bc7-8e17-521b669e030d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "91a86707-3121-4bc7-8e17-521b669e030d" (UID: "91a86707-3121-4bc7-8e17-521b669e030d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:02:10.382800 kubelet[3094]: I1101 01:02:10.382735 3094 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a86707-3121-4bc7-8e17-521b669e030d-kube-api-access-jg5wk" (OuterVolumeSpecName: "kube-api-access-jg5wk") pod "91a86707-3121-4bc7-8e17-521b669e030d" (UID: "91a86707-3121-4bc7-8e17-521b669e030d"). InnerVolumeSpecName "kube-api-access-jg5wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:02:10.481862 kubelet[3094]: I1101 01:02:10.481741 3094 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91a86707-3121-4bc7-8e17-521b669e030d-whisker-ca-bundle\") on node \"ci-4081.3.6-n-13ad226fb7\" DevicePath \"\"" Nov 1 01:02:10.481862 kubelet[3094]: I1101 01:02:10.481818 3094 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jg5wk\" (UniqueName: \"kubernetes.io/projected/91a86707-3121-4bc7-8e17-521b669e030d-kube-api-access-jg5wk\") on node \"ci-4081.3.6-n-13ad226fb7\" DevicePath \"\"" Nov 1 01:02:10.481862 kubelet[3094]: I1101 01:02:10.481850 3094 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/91a86707-3121-4bc7-8e17-521b669e030d-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-13ad226fb7\" DevicePath \"\"" Nov 1 01:02:10.779649 systemd[1]: Removed slice kubepods-besteffort-pod91a86707_3121_4bc7_8e17_521b669e030d.slice - libcontainer container kubepods-besteffort-pod91a86707_3121_4bc7_8e17_521b669e030d.slice. Nov 1 01:02:10.885671 kubelet[3094]: I1101 01:02:10.885623 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vg458" podStartSLOduration=2.089367282 podStartE2EDuration="12.885608261s" podCreationTimestamp="2025-11-01 01:01:58 +0000 UTC" firstStartedPulling="2025-11-01 01:01:59.299628076 +0000 UTC m=+18.579764851" lastFinishedPulling="2025-11-01 01:02:10.095869057 +0000 UTC m=+29.376005830" observedRunningTime="2025-11-01 01:02:10.885153549 +0000 UTC m=+30.165290335" watchObservedRunningTime="2025-11-01 01:02:10.885608261 +0000 UTC m=+30.165745042" Nov 1 01:02:10.920620 systemd[1]: Created slice kubepods-besteffort-pod53044ffa_faec_4388_b8a9_277f38bf6718.slice - libcontainer container kubepods-besteffort-pod53044ffa_faec_4388_b8a9_277f38bf6718.slice. 
Nov 1 01:02:10.985715 kubelet[3094]: I1101 01:02:10.985568 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53044ffa-faec-4388-b8a9-277f38bf6718-whisker-ca-bundle\") pod \"whisker-57bf865dbd-dvkh4\" (UID: \"53044ffa-faec-4388-b8a9-277f38bf6718\") " pod="calico-system/whisker-57bf865dbd-dvkh4" Nov 1 01:02:10.985715 kubelet[3094]: I1101 01:02:10.985702 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc55k\" (UniqueName: \"kubernetes.io/projected/53044ffa-faec-4388-b8a9-277f38bf6718-kube-api-access-sc55k\") pod \"whisker-57bf865dbd-dvkh4\" (UID: \"53044ffa-faec-4388-b8a9-277f38bf6718\") " pod="calico-system/whisker-57bf865dbd-dvkh4" Nov 1 01:02:10.986109 kubelet[3094]: I1101 01:02:10.985762 3094 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53044ffa-faec-4388-b8a9-277f38bf6718-whisker-backend-key-pair\") pod \"whisker-57bf865dbd-dvkh4\" (UID: \"53044ffa-faec-4388-b8a9-277f38bf6718\") " pod="calico-system/whisker-57bf865dbd-dvkh4" Nov 1 01:02:11.085278 systemd[1]: run-netns-cni\x2d4e7af391\x2d8d35\x2da038\x2daebb\x2d7edb9ee35571.mount: Deactivated successfully. Nov 1 01:02:11.085620 systemd[1]: var-lib-kubelet-pods-91a86707\x2d3121\x2d4bc7\x2d8e17\x2d521b669e030d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djg5wk.mount: Deactivated successfully. Nov 1 01:02:11.085958 systemd[1]: var-lib-kubelet-pods-91a86707\x2d3121\x2d4bc7\x2d8e17\x2d521b669e030d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 01:02:11.224543 containerd[1834]: time="2025-11-01T01:02:11.224495363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bf865dbd-dvkh4,Uid:53044ffa-faec-4388-b8a9-277f38bf6718,Namespace:calico-system,Attempt:0,}" Nov 1 01:02:11.296878 systemd-networkd[1507]: cali594c0469e6e: Link UP Nov 1 01:02:11.297050 systemd-networkd[1507]: cali594c0469e6e: Gained carrier Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.238 [INFO][4747] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.246 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0 whisker-57bf865dbd- calico-system 53044ffa-faec-4388-b8a9-277f38bf6718 876 0 2025-11-01 01:02:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57bf865dbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 whisker-57bf865dbd-dvkh4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali594c0469e6e [] [] }} ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.246 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.263 [INFO][4767] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" HandleID="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.263 [INFO][4767] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" HandleID="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"whisker-57bf865dbd-dvkh4", "timestamp":"2025-11-01 01:02:11.26371077 +0000 UTC"}, Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.263 [INFO][4767] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.263 [INFO][4767] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.263 [INFO][4767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.270 [INFO][4767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.274 [INFO][4767] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.278 [INFO][4767] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.280 [INFO][4767] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.282 [INFO][4767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.282 [INFO][4767] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.283 [INFO][4767] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623 Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.286 [INFO][4767] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.289 [INFO][4767] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.1.193/26] block=192.168.1.192/26 handle="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.289 [INFO][4767] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.193/26] handle="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.289 [INFO][4767] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:11.305104 containerd[1834]: 2025-11-01 01:02:11.289 [INFO][4767] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.193/26] IPv6=[] ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" HandleID="k8s-pod-network.84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" Nov 1 01:02:11.305876 containerd[1834]: 2025-11-01 01:02:11.291 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0", GenerateName:"whisker-57bf865dbd-", Namespace:"calico-system", SelfLink:"", UID:"53044ffa-faec-4388-b8a9-277f38bf6718", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57bf865dbd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"whisker-57bf865dbd-dvkh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali594c0469e6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:11.305876 containerd[1834]: 2025-11-01 01:02:11.291 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.193/32] ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" Nov 1 01:02:11.305876 containerd[1834]: 2025-11-01 01:02:11.291 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali594c0469e6e ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" Nov 1 01:02:11.305876 containerd[1834]: 2025-11-01 01:02:11.297 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" Nov 1 01:02:11.305876 containerd[1834]: 2025-11-01 01:02:11.297 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0", GenerateName:"whisker-57bf865dbd-", Namespace:"calico-system", SelfLink:"", UID:"53044ffa-faec-4388-b8a9-277f38bf6718", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 2, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57bf865dbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623", Pod:"whisker-57bf865dbd-dvkh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali594c0469e6e", MAC:"0a:2f:66:03:e4:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:11.305876 containerd[1834]: 2025-11-01 01:02:11.303 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623" Namespace="calico-system" 
Pod="whisker-57bf865dbd-dvkh4" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--57bf865dbd--dvkh4-eth0" Nov 1 01:02:11.314381 containerd[1834]: time="2025-11-01T01:02:11.314247260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:11.314381 containerd[1834]: time="2025-11-01T01:02:11.314293383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:11.314381 containerd[1834]: time="2025-11-01T01:02:11.314303508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:11.314762 containerd[1834]: time="2025-11-01T01:02:11.314668340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:11.338468 systemd[1]: Started cri-containerd-84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623.scope - libcontainer container 84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623. 
Nov 1 01:02:11.374709 containerd[1834]: time="2025-11-01T01:02:11.374675780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57bf865dbd-dvkh4,Uid:53044ffa-faec-4388-b8a9-277f38bf6718,Namespace:calico-system,Attempt:0,} returns sandbox id \"84417c0c309eff214ba14b61b0512ffee1dda7d7941915118de7530374645623\"" Nov 1 01:02:11.376347 containerd[1834]: time="2025-11-01T01:02:11.376324932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:02:11.429306 kernel: bpftool[4987]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 01:02:11.586242 systemd-networkd[1507]: vxlan.calico: Link UP Nov 1 01:02:11.586245 systemd-networkd[1507]: vxlan.calico: Gained carrier Nov 1 01:02:11.737513 containerd[1834]: time="2025-11-01T01:02:11.737460933Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:11.737989 containerd[1834]: time="2025-11-01T01:02:11.737930755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:02:11.737989 containerd[1834]: time="2025-11-01T01:02:11.737951598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:02:11.738084 kubelet[3094]: E1101 01:02:11.738061 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:11.738328 kubelet[3094]: E1101 01:02:11.738096 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:11.738356 kubelet[3094]: E1101 01:02:11.738200 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0af9fc2fd2874665a6a7d3eedbb1bf4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:11.739872 containerd[1834]: time="2025-11-01T01:02:11.739860475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:02:11.877895 kubelet[3094]: I1101 01:02:11.877822 3094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:02:12.104004 containerd[1834]: time="2025-11-01T01:02:12.103741784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:12.115554 containerd[1834]: time="2025-11-01T01:02:12.115470178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:02:12.115554 containerd[1834]: time="2025-11-01T01:02:12.115543513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:02:12.115794 kubelet[3094]: E1101 01:02:12.115746 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:12.115794 kubelet[3094]: E1101 01:02:12.115775 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:12.115885 kubelet[3094]: E1101 01:02:12.115843 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:12.117104 kubelet[3094]: E1101 01:02:12.117062 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:02:12.673765 systemd-networkd[1507]: vxlan.calico: Gained IPv6LL Nov 1 01:02:12.775368 kubelet[3094]: I1101 01:02:12.775264 3094 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a86707-3121-4bc7-8e17-521b669e030d" path="/var/lib/kubelet/pods/91a86707-3121-4bc7-8e17-521b669e030d/volumes" Nov 1 01:02:12.883813 kubelet[3094]: E1101 01:02:12.883698 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:02:13.313545 systemd-networkd[1507]: cali594c0469e6e: Gained IPv6LL Nov 1 01:02:17.769920 containerd[1834]: time="2025-11-01T01:02:17.769834773Z" level=info msg="StopPodSandbox for \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\"" Nov 1 01:02:17.770931 containerd[1834]: time="2025-11-01T01:02:17.770084743Z" level=info msg="StopPodSandbox for \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\"" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.838 [INFO][5128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.838 [INFO][5128] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" iface="eth0" netns="/var/run/netns/cni-13dc6073-5505-ceb4-9ea1-0e69e0168dd6" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.838 [INFO][5128] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" iface="eth0" netns="/var/run/netns/cni-13dc6073-5505-ceb4-9ea1-0e69e0168dd6" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.842 [INFO][5128] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" iface="eth0" netns="/var/run/netns/cni-13dc6073-5505-ceb4-9ea1-0e69e0168dd6" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.842 [INFO][5128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.842 [INFO][5128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.854 [INFO][5165] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.854 [INFO][5165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.854 [INFO][5165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.859 [WARNING][5165] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.859 [INFO][5165] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.860 [INFO][5165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:17.861684 containerd[1834]: 2025-11-01 01:02:17.860 [INFO][5128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:17.862020 containerd[1834]: time="2025-11-01T01:02:17.861767268Z" level=info msg="TearDown network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\" successfully" Nov 1 01:02:17.862020 containerd[1834]: time="2025-11-01T01:02:17.861785699Z" level=info msg="StopPodSandbox for \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\" returns successfully" Nov 1 01:02:17.862227 containerd[1834]: time="2025-11-01T01:02:17.862206316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6jr8,Uid:204dfdb6-7331-4950-b1cc-50b35a251b49,Namespace:kube-system,Attempt:1,}" Nov 1 01:02:17.863547 systemd[1]: run-netns-cni\x2d13dc6073\x2d5505\x2dceb4\x2d9ea1\x2d0e69e0168dd6.mount: Deactivated successfully. 
Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.841 [INFO][5129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.841 [INFO][5129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" iface="eth0" netns="/var/run/netns/cni-fb25dfad-2169-9280-76b7-3fd2bd679a7a" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.842 [INFO][5129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" iface="eth0" netns="/var/run/netns/cni-fb25dfad-2169-9280-76b7-3fd2bd679a7a" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.842 [INFO][5129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" iface="eth0" netns="/var/run/netns/cni-fb25dfad-2169-9280-76b7-3fd2bd679a7a" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.842 [INFO][5129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.842 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.854 [INFO][5167] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.855 
[INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.860 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.863 [WARNING][5167] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.863 [INFO][5167] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.864 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:17.866165 containerd[1834]: 2025-11-01 01:02:17.865 [INFO][5129] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:17.866455 containerd[1834]: time="2025-11-01T01:02:17.866233683Z" level=info msg="TearDown network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\" successfully" Nov 1 01:02:17.866455 containerd[1834]: time="2025-11-01T01:02:17.866246644Z" level=info msg="StopPodSandbox for \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\" returns successfully" Nov 1 01:02:17.866605 containerd[1834]: time="2025-11-01T01:02:17.866590184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-jkbn2,Uid:61ef21f2-b413-4ed0-8572-35cbc407e679,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:02:17.867781 systemd[1]: run-netns-cni\x2dfb25dfad\x2d2169\x2d9280\x2d76b7\x2d3fd2bd679a7a.mount: Deactivated successfully. Nov 1 01:02:17.933577 systemd-networkd[1507]: cali51bb174985e: Link UP Nov 1 01:02:17.933735 systemd-networkd[1507]: cali51bb174985e: Gained carrier Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.898 [INFO][5200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0 coredns-674b8bbfcf- kube-system 204dfdb6-7331-4950-b1cc-50b35a251b49 910 0 2025-11-01 01:01:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 coredns-674b8bbfcf-b6jr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51bb174985e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-" Nov 1 01:02:17.939958 
containerd[1834]: 2025-11-01 01:02:17.898 [INFO][5200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5245] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" HandleID="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5245] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" HandleID="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7790), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"coredns-674b8bbfcf-b6jr8", "timestamp":"2025-11-01 01:02:17.911309925 +0000 UTC"}, Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5245] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5245] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.916 [INFO][5245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.919 [INFO][5245] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.922 [INFO][5245] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.924 [INFO][5245] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.925 [INFO][5245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.925 [INFO][5245] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.926 [INFO][5245] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919 Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.929 [INFO][5245] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.931 [INFO][5245] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.1.194/26] block=192.168.1.192/26 handle="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.931 [INFO][5245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.194/26] handle="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.931 [INFO][5245] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:17.939958 containerd[1834]: 2025-11-01 01:02:17.931 [INFO][5245] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.194/26] IPv6=[] ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" HandleID="k8s-pod-network.e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.940402 containerd[1834]: 2025-11-01 01:02:17.932 [INFO][5200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"204dfdb6-7331-4950-b1cc-50b35a251b49", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"coredns-674b8bbfcf-b6jr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51bb174985e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:17.940402 containerd[1834]: 2025-11-01 01:02:17.932 [INFO][5200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.194/32] ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.940402 containerd[1834]: 2025-11-01 01:02:17.932 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51bb174985e ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.940402 containerd[1834]: 2025-11-01 01:02:17.933 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.940402 containerd[1834]: 2025-11-01 01:02:17.934 [INFO][5200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"204dfdb6-7331-4950-b1cc-50b35a251b49", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919", Pod:"coredns-674b8bbfcf-b6jr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51bb174985e", MAC:"46:69:d6:36:fd:fa", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:17.940402 containerd[1834]: 2025-11-01 01:02:17.938 [INFO][5200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919" Namespace="kube-system" Pod="coredns-674b8bbfcf-b6jr8" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:17.948140 containerd[1834]: time="2025-11-01T01:02:17.947899198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:17.948140 containerd[1834]: time="2025-11-01T01:02:17.948099088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:17.948140 containerd[1834]: time="2025-11-01T01:02:17.948107167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:17.948263 containerd[1834]: time="2025-11-01T01:02:17.948152244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:17.972512 systemd[1]: Started cri-containerd-e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919.scope - libcontainer container e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919. 
Nov 1 01:02:17.999598 containerd[1834]: time="2025-11-01T01:02:17.999571501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b6jr8,Uid:204dfdb6-7331-4950-b1cc-50b35a251b49,Namespace:kube-system,Attempt:1,} returns sandbox id \"e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919\"" Nov 1 01:02:18.001686 containerd[1834]: time="2025-11-01T01:02:18.001669870Z" level=info msg="CreateContainer within sandbox \"e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:02:18.005743 containerd[1834]: time="2025-11-01T01:02:18.005730126Z" level=info msg="CreateContainer within sandbox \"e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"606e02ed8a9c16c9ad9459105d48e2ad0ee6aa2383aecd2bf7d01ebe3beb4891\"" Nov 1 01:02:18.005935 containerd[1834]: time="2025-11-01T01:02:18.005925937Z" level=info msg="StartContainer for \"606e02ed8a9c16c9ad9459105d48e2ad0ee6aa2383aecd2bf7d01ebe3beb4891\"" Nov 1 01:02:18.031069 systemd-networkd[1507]: cali4e417b6d8fd: Link UP Nov 1 01:02:18.031316 systemd-networkd[1507]: cali4e417b6d8fd: Gained carrier Nov 1 01:02:18.032195 systemd[1]: Started cri-containerd-606e02ed8a9c16c9ad9459105d48e2ad0ee6aa2383aecd2bf7d01ebe3beb4891.scope - libcontainer container 606e02ed8a9c16c9ad9459105d48e2ad0ee6aa2383aecd2bf7d01ebe3beb4891. 
Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:17.899 [INFO][5201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0 calico-apiserver-65b64f6597- calico-apiserver 61ef21f2-b413-4ed0-8572-35cbc407e679 911 0 2025-11-01 01:01:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b64f6597 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 calico-apiserver-65b64f6597-jkbn2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e417b6d8fd [] [] }} ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:17.899 [INFO][5201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5247] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" HandleID="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5247] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" HandleID="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"calico-apiserver-65b64f6597-jkbn2", "timestamp":"2025-11-01 01:02:17.911309994 +0000 UTC"}, Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:17.911 [INFO][5247] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:17.931 [INFO][5247] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:17.931 [INFO][5247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.017 [INFO][5247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.019 [INFO][5247] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.021 [INFO][5247] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.022 [INFO][5247] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.023 [INFO][5247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.023 [INFO][5247] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.024 [INFO][5247] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60 Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.026 [INFO][5247] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.029 [INFO][5247] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.1.195/26] block=192.168.1.192/26 handle="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.029 [INFO][5247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.195/26] handle="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.029 [INFO][5247] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:18.039147 containerd[1834]: 2025-11-01 01:02:18.029 [INFO][5247] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.195/26] IPv6=[] ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" HandleID="k8s-pod-network.30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:18.039664 containerd[1834]: 2025-11-01 01:02:18.030 [INFO][5201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ef21f2-b413-4ed0-8572-35cbc407e679", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"calico-apiserver-65b64f6597-jkbn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e417b6d8fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:18.039664 containerd[1834]: 2025-11-01 01:02:18.030 [INFO][5201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.195/32] ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:18.039664 containerd[1834]: 2025-11-01 01:02:18.030 [INFO][5201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e417b6d8fd ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:18.039664 containerd[1834]: 2025-11-01 01:02:18.031 [INFO][5201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" 
WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:18.039664 containerd[1834]: 2025-11-01 01:02:18.032 [INFO][5201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ef21f2-b413-4ed0-8572-35cbc407e679", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60", Pod:"calico-apiserver-65b64f6597-jkbn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e417b6d8fd", MAC:"da:d1:7e:64:d1:08", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:18.039664 containerd[1834]: 2025-11-01 01:02:18.037 [INFO][5201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-jkbn2" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:18.044170 containerd[1834]: time="2025-11-01T01:02:18.044148099Z" level=info msg="StartContainer for \"606e02ed8a9c16c9ad9459105d48e2ad0ee6aa2383aecd2bf7d01ebe3beb4891\" returns successfully" Nov 1 01:02:18.048421 containerd[1834]: time="2025-11-01T01:02:18.048366513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:18.048421 containerd[1834]: time="2025-11-01T01:02:18.048409733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:18.048421 containerd[1834]: time="2025-11-01T01:02:18.048421336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:18.048521 containerd[1834]: time="2025-11-01T01:02:18.048477670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:18.069520 systemd[1]: Started cri-containerd-30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60.scope - libcontainer container 30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60. 
Nov 1 01:02:18.114087 containerd[1834]: time="2025-11-01T01:02:18.114058976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-jkbn2,Uid:61ef21f2-b413-4ed0-8572-35cbc407e679,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60\"" Nov 1 01:02:18.115030 containerd[1834]: time="2025-11-01T01:02:18.114987002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:18.465059 containerd[1834]: time="2025-11-01T01:02:18.464930124Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:18.465833 containerd[1834]: time="2025-11-01T01:02:18.465810950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:18.465962 containerd[1834]: time="2025-11-01T01:02:18.465875221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:02:18.466043 kubelet[3094]: E1101 01:02:18.465998 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:18.466043 kubelet[3094]: E1101 01:02:18.466028 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:18.466334 kubelet[3094]: E1101 01:02:18.466107 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:18.467332 kubelet[3094]: E1101 01:02:18.467264 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:02:18.769881 containerd[1834]: time="2025-11-01T01:02:18.769627216Z" level=info msg="StopPodSandbox for \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\"" Nov 1 01:02:18.769881 
containerd[1834]: time="2025-11-01T01:02:18.769626952Z" level=info msg="StopPodSandbox for \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\"" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" iface="eth0" netns="/var/run/netns/cni-b7d45af3-6274-9133-b4a8-32ca0abcc2d7" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" iface="eth0" netns="/var/run/netns/cni-b7d45af3-6274-9133-b4a8-32ca0abcc2d7" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" iface="eth0" netns="/var/run/netns/cni-b7d45af3-6274-9133-b4a8-32ca0abcc2d7" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.810 [INFO][5482] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.810 [INFO][5482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.810 [INFO][5482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.815 [WARNING][5482] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.815 [INFO][5482] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.816 [INFO][5482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:18.817872 containerd[1834]: 2025-11-01 01:02:18.817 [INFO][5449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:18.818210 containerd[1834]: time="2025-11-01T01:02:18.817963778Z" level=info msg="TearDown network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\" successfully" Nov 1 01:02:18.818210 containerd[1834]: time="2025-11-01T01:02:18.817981641Z" level=info msg="StopPodSandbox for \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\" returns successfully" Nov 1 01:02:18.818464 containerd[1834]: time="2025-11-01T01:02:18.818430056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4gtnc,Uid:f10f5e42-eb9a-47ab-8781-8e9dfee85efa,Namespace:calico-system,Attempt:1,}" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" iface="eth0" netns="/var/run/netns/cni-35ddd954-1c81-9c10-3400-5620d6d66203" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" iface="eth0" netns="/var/run/netns/cni-35ddd954-1c81-9c10-3400-5620d6d66203" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5450] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" iface="eth0" netns="/var/run/netns/cni-35ddd954-1c81-9c10-3400-5620d6d66203" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.797 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.810 [INFO][5484] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.810 [INFO][5484] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.816 [INFO][5484] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.820 [WARNING][5484] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.820 [INFO][5484] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.821 [INFO][5484] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:18.823434 containerd[1834]: 2025-11-01 01:02:18.822 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:18.823743 containerd[1834]: time="2025-11-01T01:02:18.823477076Z" level=info msg="TearDown network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\" successfully" Nov 1 01:02:18.823743 containerd[1834]: time="2025-11-01T01:02:18.823489026Z" level=info msg="StopPodSandbox for \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\" returns successfully" Nov 1 01:02:18.823968 containerd[1834]: time="2025-11-01T01:02:18.823921216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vnfp,Uid:1c2067e6-df38-44d0-9df8-192be51b26fc,Namespace:calico-system,Attempt:1,}" Nov 1 01:02:18.868861 systemd[1]: run-netns-cni\x2d35ddd954\x2d1c81\x2d9c10\x2d3400\x2d5620d6d66203.mount: Deactivated successfully. Nov 1 01:02:18.868944 systemd[1]: run-netns-cni\x2db7d45af3\x2d6274\x2d9133\x2db4a8\x2d32ca0abcc2d7.mount: Deactivated successfully. 
Nov 1 01:02:18.879408 systemd-networkd[1507]: calic127a7d4425: Link UP Nov 1 01:02:18.879522 systemd-networkd[1507]: calic127a7d4425: Gained carrier Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.840 [INFO][5517] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0 goldmane-666569f655- calico-system f10f5e42-eb9a-47ab-8781-8e9dfee85efa 928 0 2025-11-01 01:01:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 goldmane-666569f655-4gtnc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic127a7d4425 [] [] }} ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.840 [INFO][5517] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.853 [INFO][5540] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" HandleID="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.853 [INFO][5540] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" HandleID="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026f7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"goldmane-666569f655-4gtnc", "timestamp":"2025-11-01 01:02:18.85388527 +0000 UTC"}, Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.854 [INFO][5540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.854 [INFO][5540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.854 [INFO][5540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.858 [INFO][5540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.860 [INFO][5540] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.862 [INFO][5540] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.863 [INFO][5540] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.864 [INFO][5540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.864 [INFO][5540] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.865 [INFO][5540] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26 Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.874 [INFO][5540] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.877 [INFO][5540] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.1.196/26] block=192.168.1.192/26 handle="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.877 [INFO][5540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.196/26] handle="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.877 [INFO][5540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:18.885244 containerd[1834]: 2025-11-01 01:02:18.877 [INFO][5540] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.196/26] IPv6=[] ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" HandleID="k8s-pod-network.0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.885638 containerd[1834]: 2025-11-01 01:02:18.878 [INFO][5517] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f10f5e42-eb9a-47ab-8781-8e9dfee85efa", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"goldmane-666569f655-4gtnc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic127a7d4425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:18.885638 containerd[1834]: 2025-11-01 01:02:18.878 [INFO][5517] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.196/32] ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.885638 containerd[1834]: 2025-11-01 01:02:18.878 [INFO][5517] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic127a7d4425 ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.885638 containerd[1834]: 2025-11-01 01:02:18.879 [INFO][5517] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.885638 containerd[1834]: 2025-11-01 01:02:18.879 [INFO][5517] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f10f5e42-eb9a-47ab-8781-8e9dfee85efa", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26", Pod:"goldmane-666569f655-4gtnc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic127a7d4425", MAC:"32:2b:dc:78:57:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:18.885638 containerd[1834]: 2025-11-01 01:02:18.884 [INFO][5517] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26" Namespace="calico-system" 
Pod="goldmane-666569f655-4gtnc" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:18.893949 containerd[1834]: time="2025-11-01T01:02:18.893894914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:18.894147 containerd[1834]: time="2025-11-01T01:02:18.894131416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:18.894147 containerd[1834]: time="2025-11-01T01:02:18.894142506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:18.894200 containerd[1834]: time="2025-11-01T01:02:18.894189798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:18.895984 kubelet[3094]: E1101 01:02:18.895956 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:02:18.906211 kubelet[3094]: I1101 01:02:18.906170 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b6jr8" podStartSLOduration=32.906153655 podStartE2EDuration="32.906153655s" podCreationTimestamp="2025-11-01 01:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-01 01:02:18.905887956 +0000 UTC m=+38.186024733" watchObservedRunningTime="2025-11-01 01:02:18.906153655 +0000 UTC m=+38.186290441" Nov 1 01:02:18.920597 systemd[1]: Started cri-containerd-0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26.scope - libcontainer container 0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26. Nov 1 01:02:18.942757 containerd[1834]: time="2025-11-01T01:02:18.942711800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4gtnc,Uid:f10f5e42-eb9a-47ab-8781-8e9dfee85efa,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26\"" Nov 1 01:02:18.943403 containerd[1834]: time="2025-11-01T01:02:18.943391299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:02:19.022327 systemd-networkd[1507]: calic41e6d2f214: Link UP Nov 1 01:02:19.023162 systemd-networkd[1507]: calic41e6d2f214: Gained carrier Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.861 [INFO][5545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0 csi-node-driver- calico-system 1c2067e6-df38-44d0-9df8-192be51b26fc 929 0 2025-11-01 01:01:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 csi-node-driver-9vnfp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic41e6d2f214 [] [] }} ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" 
WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.861 [INFO][5545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.874 [INFO][5578] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" HandleID="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.874 [INFO][5578] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" HandleID="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039a760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"csi-node-driver-9vnfp", "timestamp":"2025-11-01 01:02:18.87444285 +0000 UTC"}, Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.874 [INFO][5578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.877 [INFO][5578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.877 [INFO][5578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.961 [INFO][5578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.970 [INFO][5578] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.980 [INFO][5578] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.986 [INFO][5578] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.991 [INFO][5578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.992 [INFO][5578] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:18.996 [INFO][5578] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:19.002 [INFO][5578] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:19.014 [INFO][5578] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.1.197/26] block=192.168.1.192/26 handle="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:19.014 [INFO][5578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.197/26] handle="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:19.014 [INFO][5578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:19.049815 containerd[1834]: 2025-11-01 01:02:19.014 [INFO][5578] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.197/26] IPv6=[] ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" HandleID="k8s-pod-network.04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:19.051262 containerd[1834]: 2025-11-01 01:02:19.018 [INFO][5545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c2067e6-df38-44d0-9df8-192be51b26fc", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"csi-node-driver-9vnfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic41e6d2f214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:19.051262 containerd[1834]: 2025-11-01 01:02:19.019 [INFO][5545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.197/32] ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:19.051262 containerd[1834]: 2025-11-01 01:02:19.019 [INFO][5545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic41e6d2f214 ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:19.051262 containerd[1834]: 2025-11-01 01:02:19.022 [INFO][5545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:19.051262 containerd[1834]: 2025-11-01 01:02:19.026 
[INFO][5545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c2067e6-df38-44d0-9df8-192be51b26fc", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f", Pod:"csi-node-driver-9vnfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic41e6d2f214", MAC:"fe:80:29:26:5f:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:19.051262 containerd[1834]: 2025-11-01 01:02:19.045 [INFO][5545] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f" Namespace="calico-system" Pod="csi-node-driver-9vnfp" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:19.061297 containerd[1834]: time="2025-11-01T01:02:19.061015177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:19.061297 containerd[1834]: time="2025-11-01T01:02:19.061262181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:19.061297 containerd[1834]: time="2025-11-01T01:02:19.061274251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:19.061415 containerd[1834]: time="2025-11-01T01:02:19.061317658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:19.080857 systemd[1]: Started cri-containerd-04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f.scope - libcontainer container 04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f. 
Nov 1 01:02:19.133848 containerd[1834]: time="2025-11-01T01:02:19.133757691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9vnfp,Uid:1c2067e6-df38-44d0-9df8-192be51b26fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f\"" Nov 1 01:02:19.201555 systemd-networkd[1507]: cali4e417b6d8fd: Gained IPv6LL Nov 1 01:02:19.277887 containerd[1834]: time="2025-11-01T01:02:19.277654724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:19.278566 containerd[1834]: time="2025-11-01T01:02:19.278538542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:02:19.278633 containerd[1834]: time="2025-11-01T01:02:19.278608141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:02:19.278721 kubelet[3094]: E1101 01:02:19.278701 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:19.278775 kubelet[3094]: E1101 01:02:19.278730 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:19.278893 kubelet[3094]: E1101 
01:02:19.278863 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:19.279000 containerd[1834]: time="2025-11-01T01:02:19.278894869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:02:19.280011 kubelet[3094]: E1101 01:02:19.279996 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:19.632290 containerd[1834]: time="2025-11-01T01:02:19.632002673Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:02:19.633062 containerd[1834]: time="2025-11-01T01:02:19.633034721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:02:19.633124 containerd[1834]: time="2025-11-01T01:02:19.633094479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:02:19.633318 kubelet[3094]: E1101 01:02:19.633262 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:19.633480 kubelet[3094]: E1101 01:02:19.633327 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:19.633480 kubelet[3094]: E1101 01:02:19.633405 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:19.635685 containerd[1834]: time="2025-11-01T01:02:19.635658248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:02:19.770613 containerd[1834]: time="2025-11-01T01:02:19.770513982Z" level=info msg="StopPodSandbox for \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\"" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.800 [INFO][5709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.800 [INFO][5709] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" iface="eth0" netns="/var/run/netns/cni-2bc507ef-872e-b41c-af85-8eaf69946e8a" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.801 [INFO][5709] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" iface="eth0" netns="/var/run/netns/cni-2bc507ef-872e-b41c-af85-8eaf69946e8a" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.801 [INFO][5709] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" iface="eth0" netns="/var/run/netns/cni-2bc507ef-872e-b41c-af85-8eaf69946e8a" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.801 [INFO][5709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.801 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.838 [INFO][5725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.838 [INFO][5725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.838 [INFO][5725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.842 [WARNING][5725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.842 [INFO][5725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.843 [INFO][5725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:19.844737 containerd[1834]: 2025-11-01 01:02:19.843 [INFO][5709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:19.845020 containerd[1834]: time="2025-11-01T01:02:19.844778266Z" level=info msg="TearDown network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\" successfully" Nov 1 01:02:19.845020 containerd[1834]: time="2025-11-01T01:02:19.844794859Z" level=info msg="StopPodSandbox for \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\" returns successfully" Nov 1 01:02:19.845154 containerd[1834]: time="2025-11-01T01:02:19.845142699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cbdf9dd7-589d7,Uid:66631db9-6f47-4e8c-8fde-e00b56c3ece6,Namespace:calico-system,Attempt:1,}" Nov 1 01:02:19.865920 systemd[1]: run-netns-cni\x2d2bc507ef\x2d872e\x2db41c\x2daf85\x2d8eaf69946e8a.mount: Deactivated successfully. 
Nov 1 01:02:19.899125 kubelet[3094]: E1101 01:02:19.899055 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:02:19.899263 kubelet[3094]: E1101 01:02:19.899234 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:19.899730 systemd-networkd[1507]: cali7ff2b71a6d5: Link UP Nov 1 01:02:19.899936 systemd-networkd[1507]: cali7ff2b71a6d5: Gained carrier Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.864 [INFO][5743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0 calico-kube-controllers-59cbdf9dd7- calico-system 66631db9-6f47-4e8c-8fde-e00b56c3ece6 957 0 2025-11-01 01:01:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59cbdf9dd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 calico-kube-controllers-59cbdf9dd7-589d7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7ff2b71a6d5 [] [] }} ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" Pod="calico-kube-controllers-59cbdf9dd7-589d7" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.864 [INFO][5743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" Pod="calico-kube-controllers-59cbdf9dd7-589d7" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.876 [INFO][5765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" HandleID="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.876 [INFO][5765] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" HandleID="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000700130), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"calico-kube-controllers-59cbdf9dd7-589d7", "timestamp":"2025-11-01 01:02:19.876659065 +0000 UTC"}, 
Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.876 [INFO][5765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.876 [INFO][5765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.876 [INFO][5765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.881 [INFO][5765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.884 [INFO][5765] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.887 [INFO][5765] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.888 [INFO][5765] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.890 [INFO][5765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.890 [INFO][5765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.891 [INFO][5765] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.894 [INFO][5765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.897 [INFO][5765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.198/26] block=192.168.1.192/26 handle="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.897 [INFO][5765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.198/26] handle="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.897 [INFO][5765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:02:19.907461 containerd[1834]: 2025-11-01 01:02:19.897 [INFO][5765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.198/26] IPv6=[] ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" HandleID="k8s-pod-network.749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.908008 containerd[1834]: 2025-11-01 01:02:19.898 [INFO][5743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" Pod="calico-kube-controllers-59cbdf9dd7-589d7" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0", GenerateName:"calico-kube-controllers-59cbdf9dd7-", Namespace:"calico-system", SelfLink:"", UID:"66631db9-6f47-4e8c-8fde-e00b56c3ece6", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59cbdf9dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"calico-kube-controllers-59cbdf9dd7-589d7", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ff2b71a6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:19.908008 containerd[1834]: 2025-11-01 01:02:19.898 [INFO][5743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.198/32] ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" Pod="calico-kube-controllers-59cbdf9dd7-589d7" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.908008 containerd[1834]: 2025-11-01 01:02:19.898 [INFO][5743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ff2b71a6d5 ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" Pod="calico-kube-controllers-59cbdf9dd7-589d7" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.908008 containerd[1834]: 2025-11-01 01:02:19.900 [INFO][5743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" Pod="calico-kube-controllers-59cbdf9dd7-589d7" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.908008 containerd[1834]: 2025-11-01 01:02:19.900 [INFO][5743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" Pod="calico-kube-controllers-59cbdf9dd7-589d7" 
WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0", GenerateName:"calico-kube-controllers-59cbdf9dd7-", Namespace:"calico-system", SelfLink:"", UID:"66631db9-6f47-4e8c-8fde-e00b56c3ece6", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59cbdf9dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f", Pod:"calico-kube-controllers-59cbdf9dd7-589d7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ff2b71a6d5", MAC:"b2:74:9a:dc:4d:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:19.908008 containerd[1834]: 2025-11-01 01:02:19.905 [INFO][5743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f" Namespace="calico-system" 
Pod="calico-kube-controllers-59cbdf9dd7-589d7" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:19.907508 systemd-networkd[1507]: cali51bb174985e: Gained IPv6LL Nov 1 01:02:19.917449 containerd[1834]: time="2025-11-01T01:02:19.917196502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:19.917449 containerd[1834]: time="2025-11-01T01:02:19.917403267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:19.917449 containerd[1834]: time="2025-11-01T01:02:19.917411359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:19.917548 containerd[1834]: time="2025-11-01T01:02:19.917452670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:19.939400 systemd[1]: Started cri-containerd-749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f.scope - libcontainer container 749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f. 
Nov 1 01:02:19.961049 containerd[1834]: time="2025-11-01T01:02:19.961027725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59cbdf9dd7-589d7,Uid:66631db9-6f47-4e8c-8fde-e00b56c3ece6,Namespace:calico-system,Attempt:1,} returns sandbox id \"749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f\"" Nov 1 01:02:20.003278 containerd[1834]: time="2025-11-01T01:02:20.003148278Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:20.004177 containerd[1834]: time="2025-11-01T01:02:20.004090137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:02:20.004177 containerd[1834]: time="2025-11-01T01:02:20.004147706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:02:20.004245 kubelet[3094]: E1101 01:02:20.004225 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:20.004287 kubelet[3094]: E1101 01:02:20.004249 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:20.004385 kubelet[3094]: E1101 01:02:20.004356 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[
]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:20.004467 containerd[1834]: time="2025-11-01T01:02:20.004407772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:02:20.005556 kubelet[3094]: E1101 01:02:20.005537 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:20.350680 containerd[1834]: time="2025-11-01T01:02:20.350551656Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:20.351600 containerd[1834]: time="2025-11-01T01:02:20.351513443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:02:20.351600 containerd[1834]: time="2025-11-01T01:02:20.351579250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:02:20.351813 kubelet[3094]: E1101 01:02:20.351742 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:20.351813 kubelet[3094]: E1101 01:02:20.351792 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:20.351957 kubelet[3094]: E1101 01:02:20.351902 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:20.353107 kubelet[3094]: E1101 01:02:20.353089 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:20.673729 systemd-networkd[1507]: calic127a7d4425: Gained IPv6LL Nov 1 01:02:20.769523 containerd[1834]: time="2025-11-01T01:02:20.769499856Z" level=info msg="StopPodSandbox for 
\"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\"" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.796 [INFO][5844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.796 [INFO][5844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" iface="eth0" netns="/var/run/netns/cni-7014da64-6be9-fcdb-23b6-1f4e833e55a6" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.797 [INFO][5844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" iface="eth0" netns="/var/run/netns/cni-7014da64-6be9-fcdb-23b6-1f4e833e55a6" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.797 [INFO][5844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" iface="eth0" netns="/var/run/netns/cni-7014da64-6be9-fcdb-23b6-1f4e833e55a6" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.797 [INFO][5844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.797 [INFO][5844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.839 [INFO][5859] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.839 [INFO][5859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.839 [INFO][5859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.846 [WARNING][5859] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.846 [INFO][5859] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.848 [INFO][5859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:20.850983 containerd[1834]: 2025-11-01 01:02:20.849 [INFO][5844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:20.852090 containerd[1834]: time="2025-11-01T01:02:20.851105656Z" level=info msg="TearDown network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\" successfully" Nov 1 01:02:20.852090 containerd[1834]: time="2025-11-01T01:02:20.851131298Z" level=info msg="StopPodSandbox for \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\" returns successfully" Nov 1 01:02:20.852090 containerd[1834]: time="2025-11-01T01:02:20.851736785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-6gf7n,Uid:2848d47a-0e6d-4163-bcbd-cf745e94e4c6,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:02:20.853481 systemd[1]: run-netns-cni\x2d7014da64\x2d6be9\x2dfcdb\x2d23b6\x2d1f4e833e55a6.mount: Deactivated successfully. 
Nov 1 01:02:20.900845 kubelet[3094]: E1101 01:02:20.900822 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:20.901087 kubelet[3094]: E1101 01:02:20.900937 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:20.901123 kubelet[3094]: E1101 01:02:20.901086 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:20.904764 systemd-networkd[1507]: cali8a2664faa21: Link UP Nov 1 01:02:20.904930 systemd-networkd[1507]: cali8a2664faa21: Gained carrier Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.874 [INFO][5878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0 calico-apiserver-65b64f6597- calico-apiserver 2848d47a-0e6d-4163-bcbd-cf745e94e4c6 976 0 2025-11-01 01:01:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65b64f6597 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 calico-apiserver-65b64f6597-6gf7n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a2664faa21 [] [] }} ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.874 [INFO][5878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 
01:02:20.886 [INFO][5900] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" HandleID="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.886 [INFO][5900] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" HandleID="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"calico-apiserver-65b64f6597-6gf7n", "timestamp":"2025-11-01 01:02:20.886073557 +0000 UTC"}, Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.886 [INFO][5900] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.886 [INFO][5900] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.886 [INFO][5900] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.890 [INFO][5900] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.892 [INFO][5900] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.894 [INFO][5900] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.895 [INFO][5900] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.896 [INFO][5900] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.896 [INFO][5900] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.897 [INFO][5900] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.899 [INFO][5900] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.902 [INFO][5900] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.1.199/26] block=192.168.1.192/26 handle="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.902 [INFO][5900] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.199/26] handle="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.902 [INFO][5900] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:20.911918 containerd[1834]: 2025-11-01 01:02:20.902 [INFO][5900] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.199/26] IPv6=[] ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" HandleID="k8s-pod-network.8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.912348 containerd[1834]: 2025-11-01 01:02:20.903 [INFO][5878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2848d47a-0e6d-4163-bcbd-cf745e94e4c6", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"calico-apiserver-65b64f6597-6gf7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a2664faa21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:20.912348 containerd[1834]: 2025-11-01 01:02:20.903 [INFO][5878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.199/32] ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.912348 containerd[1834]: 2025-11-01 01:02:20.903 [INFO][5878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a2664faa21 ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.912348 containerd[1834]: 2025-11-01 01:02:20.905 [INFO][5878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" 
WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.912348 containerd[1834]: 2025-11-01 01:02:20.905 [INFO][5878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2848d47a-0e6d-4163-bcbd-cf745e94e4c6", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f", Pod:"calico-apiserver-65b64f6597-6gf7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a2664faa21", MAC:"56:75:44:30:a9:83", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:20.912348 containerd[1834]: 2025-11-01 01:02:20.910 [INFO][5878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f" Namespace="calico-apiserver" Pod="calico-apiserver-65b64f6597-6gf7n" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:20.920491 containerd[1834]: time="2025-11-01T01:02:20.920235753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:20.920558 containerd[1834]: time="2025-11-01T01:02:20.920498198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:20.920558 containerd[1834]: time="2025-11-01T01:02:20.920525785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:20.920592 containerd[1834]: time="2025-11-01T01:02:20.920565560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:20.946490 systemd[1]: Started cri-containerd-8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f.scope - libcontainer container 8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f. 
Nov 1 01:02:20.973039 containerd[1834]: time="2025-11-01T01:02:20.972992031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65b64f6597-6gf7n,Uid:2848d47a-0e6d-4163-bcbd-cf745e94e4c6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f\"" Nov 1 01:02:20.973830 containerd[1834]: time="2025-11-01T01:02:20.973815929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:20.993525 systemd-networkd[1507]: calic41e6d2f214: Gained IPv6LL Nov 1 01:02:21.121488 systemd-networkd[1507]: cali7ff2b71a6d5: Gained IPv6LL Nov 1 01:02:21.357944 containerd[1834]: time="2025-11-01T01:02:21.357715079Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:21.358651 containerd[1834]: time="2025-11-01T01:02:21.358624248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:21.358715 containerd[1834]: time="2025-11-01T01:02:21.358697662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:02:21.358880 kubelet[3094]: E1101 01:02:21.358846 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:21.358914 kubelet[3094]: E1101 01:02:21.358889 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:21.358992 kubelet[3094]: E1101 01:02:21.358969 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:21.360147 kubelet[3094]: E1101 01:02:21.360130 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:02:21.770371 containerd[1834]: time="2025-11-01T01:02:21.770286224Z" level=info msg="StopPodSandbox for \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\"" Nov 1 01:02:21.822311 
containerd[1834]: 2025-11-01 01:02:21.805 [INFO][5980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.805 [INFO][5980] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" iface="eth0" netns="/var/run/netns/cni-b964db7a-bfa8-17f9-a2be-f578fc2872df" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.805 [INFO][5980] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" iface="eth0" netns="/var/run/netns/cni-b964db7a-bfa8-17f9-a2be-f578fc2872df" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.805 [INFO][5980] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" iface="eth0" netns="/var/run/netns/cni-b964db7a-bfa8-17f9-a2be-f578fc2872df" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.805 [INFO][5980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.805 [INFO][5980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.815 [INFO][6003] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.815 [INFO][6003] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.816 [INFO][6003] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.819 [WARNING][6003] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.819 [INFO][6003] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.820 [INFO][6003] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:21.822311 containerd[1834]: 2025-11-01 01:02:21.821 [INFO][5980] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:21.822589 containerd[1834]: time="2025-11-01T01:02:21.822384349Z" level=info msg="TearDown network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\" successfully" Nov 1 01:02:21.822589 containerd[1834]: time="2025-11-01T01:02:21.822400576Z" level=info msg="StopPodSandbox for \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\" returns successfully" Nov 1 01:02:21.822844 containerd[1834]: time="2025-11-01T01:02:21.822805661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mzcbx,Uid:4655cfaf-6ee0-4366-982d-ab89e39053ab,Namespace:kube-system,Attempt:1,}" Nov 1 01:02:21.824425 systemd[1]: run-netns-cni\x2db964db7a\x2dbfa8\x2d17f9\x2da2be\x2df578fc2872df.mount: Deactivated successfully. Nov 1 01:02:21.874489 systemd-networkd[1507]: calia21d69f4c6c: Link UP Nov 1 01:02:21.874635 systemd-networkd[1507]: calia21d69f4c6c: Gained carrier Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.844 [INFO][6017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0 coredns-674b8bbfcf- kube-system 4655cfaf-6ee0-4366-982d-ab89e39053ab 998 0 2025-11-01 01:01:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-13ad226fb7 coredns-674b8bbfcf-mzcbx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia21d69f4c6c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-" Nov 1 01:02:21.880041 containerd[1834]: 
2025-11-01 01:02:21.844 [INFO][6017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.856 [INFO][6039] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" HandleID="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.856 [INFO][6039] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" HandleID="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005a1520), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-13ad226fb7", "pod":"coredns-674b8bbfcf-mzcbx", "timestamp":"2025-11-01 01:02:21.856768375 +0000 UTC"}, Hostname:"ci-4081.3.6-n-13ad226fb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.856 [INFO][6039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.856 [INFO][6039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.856 [INFO][6039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-13ad226fb7' Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.860 [INFO][6039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.863 [INFO][6039] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.865 [INFO][6039] ipam/ipam.go 511: Trying affinity for 192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.865 [INFO][6039] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.866 [INFO][6039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.192/26 host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.866 [INFO][6039] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.192/26 handle="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.867 [INFO][6039] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50 Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.869 [INFO][6039] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.192/26 handle="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.872 [INFO][6039] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.1.200/26] block=192.168.1.192/26 handle="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.872 [INFO][6039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.200/26] handle="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" host="ci-4081.3.6-n-13ad226fb7" Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.872 [INFO][6039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:21.880041 containerd[1834]: 2025-11-01 01:02:21.872 [INFO][6039] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.200/26] IPv6=[] ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" HandleID="k8s-pod-network.3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.880598 containerd[1834]: 2025-11-01 01:02:21.873 [INFO][6017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4655cfaf-6ee0-4366-982d-ab89e39053ab", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"", Pod:"coredns-674b8bbfcf-mzcbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia21d69f4c6c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:21.880598 containerd[1834]: 2025-11-01 01:02:21.873 [INFO][6017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.200/32] ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.880598 containerd[1834]: 2025-11-01 01:02:21.873 [INFO][6017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia21d69f4c6c ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.880598 containerd[1834]: 2025-11-01 01:02:21.874 [INFO][6017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.880598 containerd[1834]: 2025-11-01 01:02:21.874 [INFO][6017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4655cfaf-6ee0-4366-982d-ab89e39053ab", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50", Pod:"coredns-674b8bbfcf-mzcbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia21d69f4c6c", MAC:"ca:f4:56:e9:d9:10", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:21.880598 containerd[1834]: 2025-11-01 01:02:21.879 [INFO][6017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50" Namespace="kube-system" Pod="coredns-674b8bbfcf-mzcbx" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:21.888215 containerd[1834]: time="2025-11-01T01:02:21.888163937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:02:21.888407 containerd[1834]: time="2025-11-01T01:02:21.888387474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:02:21.888438 containerd[1834]: time="2025-11-01T01:02:21.888404489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:21.888466 containerd[1834]: time="2025-11-01T01:02:21.888451296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:02:21.902247 kubelet[3094]: E1101 01:02:21.902225 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:02:21.902247 kubelet[3094]: E1101 01:02:21.902226 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:21.908357 systemd[1]: Started cri-containerd-3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50.scope - libcontainer container 3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50. 
Nov 1 01:02:21.930714 containerd[1834]: time="2025-11-01T01:02:21.930663439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mzcbx,Uid:4655cfaf-6ee0-4366-982d-ab89e39053ab,Namespace:kube-system,Attempt:1,} returns sandbox id \"3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50\"" Nov 1 01:02:21.932553 containerd[1834]: time="2025-11-01T01:02:21.932539115Z" level=info msg="CreateContainer within sandbox \"3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:02:21.937024 containerd[1834]: time="2025-11-01T01:02:21.936977901Z" level=info msg="CreateContainer within sandbox \"3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5fe20a1dc2ef7b0aad4a1dab689605ee8bf4122163d37940ab6f8cd92b3a4359\"" Nov 1 01:02:21.937205 containerd[1834]: time="2025-11-01T01:02:21.937191126Z" level=info msg="StartContainer for \"5fe20a1dc2ef7b0aad4a1dab689605ee8bf4122163d37940ab6f8cd92b3a4359\"" Nov 1 01:02:21.961727 systemd[1]: Started cri-containerd-5fe20a1dc2ef7b0aad4a1dab689605ee8bf4122163d37940ab6f8cd92b3a4359.scope - libcontainer container 5fe20a1dc2ef7b0aad4a1dab689605ee8bf4122163d37940ab6f8cd92b3a4359. 
Nov 1 01:02:22.015031 containerd[1834]: time="2025-11-01T01:02:22.014987905Z" level=info msg="StartContainer for \"5fe20a1dc2ef7b0aad4a1dab689605ee8bf4122163d37940ab6f8cd92b3a4359\" returns successfully" Nov 1 01:02:22.209562 systemd-networkd[1507]: cali8a2664faa21: Gained IPv6LL Nov 1 01:02:22.905008 kubelet[3094]: E1101 01:02:22.904973 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:02:22.920663 kubelet[3094]: I1101 01:02:22.920573 3094 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mzcbx" podStartSLOduration=36.920553517 podStartE2EDuration="36.920553517s" podCreationTimestamp="2025-11-01 01:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:02:22.920261973 +0000 UTC m=+42.200398773" watchObservedRunningTime="2025-11-01 01:02:22.920553517 +0000 UTC m=+42.200690310" Nov 1 01:02:23.169519 systemd-networkd[1507]: calia21d69f4c6c: Gained IPv6LL Nov 1 01:02:24.771935 containerd[1834]: time="2025-11-01T01:02:24.771818420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:02:25.150533 containerd[1834]: time="2025-11-01T01:02:25.150285444Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:25.151246 containerd[1834]: time="2025-11-01T01:02:25.151141870Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:02:25.151337 containerd[1834]: time="2025-11-01T01:02:25.151208900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:02:25.151369 kubelet[3094]: E1101 01:02:25.151348 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:25.151528 kubelet[3094]: E1101 01:02:25.151377 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:25.151528 kubelet[3094]: E1101 01:02:25.151451 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0af9fc2fd2874665a6a7d3eedbb1bf4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:25.153206 containerd[1834]: time="2025-11-01T01:02:25.153194898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
01:02:25.525271 containerd[1834]: time="2025-11-01T01:02:25.525100165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:25.526128 containerd[1834]: time="2025-11-01T01:02:25.526103173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:02:25.526189 containerd[1834]: time="2025-11-01T01:02:25.526173419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:02:25.526348 kubelet[3094]: E1101 01:02:25.526298 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:25.526348 kubelet[3094]: E1101 01:02:25.526330 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:25.526418 kubelet[3094]: E1101 01:02:25.526398 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:25.528147 kubelet[3094]: E1101 01:02:25.528128 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:02:27.955151 kubelet[3094]: I1101 01:02:27.955069 3094 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 01:02:32.769179 containerd[1834]: time="2025-11-01T01:02:32.769127939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:02:33.144625 containerd[1834]: time="2025-11-01T01:02:33.144340107Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:33.145290 containerd[1834]: time="2025-11-01T01:02:33.145172623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:02:33.145290 containerd[1834]: 
time="2025-11-01T01:02:33.145261875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:02:33.145398 kubelet[3094]: E1101 01:02:33.145375 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:33.145593 kubelet[3094]: E1101 01:02:33.145409 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:33.145593 kubelet[3094]: E1101 01:02:33.145492 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:33.146628 kubelet[3094]: E1101 01:02:33.146612 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:34.771386 containerd[1834]: time="2025-11-01T01:02:34.771264663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:02:35.141022 containerd[1834]: time="2025-11-01T01:02:35.140827214Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:02:35.141561 containerd[1834]: time="2025-11-01T01:02:35.141530265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:02:35.141636 containerd[1834]: time="2025-11-01T01:02:35.141591638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:02:35.141751 kubelet[3094]: E1101 01:02:35.141720 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:35.141991 kubelet[3094]: E1101 01:02:35.141759 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:02:35.141991 kubelet[3094]: E1101 01:02:35.141897 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:35.142150 containerd[1834]: time="2025-11-01T01:02:35.141980717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:02:35.479856 containerd[1834]: time="2025-11-01T01:02:35.479803303Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:35.480217 containerd[1834]: time="2025-11-01T01:02:35.480196362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:02:35.480289 containerd[1834]: time="2025-11-01T01:02:35.480245066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:02:35.480402 kubelet[3094]: E1101 01:02:35.480351 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:35.480402 kubelet[3094]: E1101 01:02:35.480381 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:35.480609 kubelet[3094]: E1101 01:02:35.480572 3094 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:35.480688 containerd[1834]: time="2025-11-01T01:02:35.480600582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:35.481705 kubelet[3094]: E1101 01:02:35.481687 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:35.834349 containerd[1834]: 
time="2025-11-01T01:02:35.834089584Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:35.834920 containerd[1834]: time="2025-11-01T01:02:35.834896186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:35.834987 containerd[1834]: time="2025-11-01T01:02:35.834965208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:02:35.835085 kubelet[3094]: E1101 01:02:35.835064 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:35.835112 kubelet[3094]: E1101 01:02:35.835094 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:35.835262 kubelet[3094]: E1101 01:02:35.835237 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:35.835348 containerd[1834]: time="2025-11-01T01:02:35.835294426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:02:35.836431 kubelet[3094]: E1101 01:02:35.836400 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:02:36.179827 containerd[1834]: time="2025-11-01T01:02:36.179569012Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:36.180427 containerd[1834]: time="2025-11-01T01:02:36.180398587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:02:36.180494 containerd[1834]: time="2025-11-01T01:02:36.180465094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:02:36.180570 kubelet[3094]: E1101 01:02:36.180551 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:36.180733 kubelet[3094]: E1101 01:02:36.180580 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:02:36.180733 kubelet[3094]: E1101 01:02:36.180651 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:36.181725 kubelet[3094]: E1101 01:02:36.181709 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:36.771665 containerd[1834]: time="2025-11-01T01:02:36.771591462Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:02:36.772188 kubelet[3094]: E1101 01:02:36.772084 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:02:37.143230 containerd[1834]: time="2025-11-01T01:02:37.143152632Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:37.143549 containerd[1834]: time="2025-11-01T01:02:37.143525842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:02:37.143595 containerd[1834]: time="2025-11-01T01:02:37.143571116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:02:37.143709 kubelet[3094]: E1101 01:02:37.143684 3094 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:37.143738 kubelet[3094]: E1101 01:02:37.143720 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:02:37.143831 kubelet[3094]: E1101 01:02:37.143810 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:37.144961 kubelet[3094]: E1101 01:02:37.144944 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:02:40.764184 containerd[1834]: time="2025-11-01T01:02:40.764097805Z" level=info msg="StopPodSandbox for \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\"" Nov 1 01:02:40.844852 
containerd[1834]: 2025-11-01 01:02:40.816 [WARNING][6247] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0", GenerateName:"calico-kube-controllers-59cbdf9dd7-", Namespace:"calico-system", SelfLink:"", UID:"66631db9-6f47-4e8c-8fde-e00b56c3ece6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59cbdf9dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f", Pod:"calico-kube-controllers-59cbdf9dd7-589d7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ff2b71a6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.816 [INFO][6247] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.816 [INFO][6247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" iface="eth0" netns="" Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.816 [INFO][6247] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.816 [INFO][6247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.828 [INFO][6262] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.829 [INFO][6262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.829 [INFO][6262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.842 [WARNING][6262] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.842 [INFO][6262] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.843 [INFO][6262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:40.844852 containerd[1834]: 2025-11-01 01:02:40.844 [INFO][6247] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.845245 containerd[1834]: time="2025-11-01T01:02:40.844854233Z" level=info msg="TearDown network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\" successfully" Nov 1 01:02:40.845245 containerd[1834]: time="2025-11-01T01:02:40.844869979Z" level=info msg="StopPodSandbox for \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\" returns successfully" Nov 1 01:02:40.845245 containerd[1834]: time="2025-11-01T01:02:40.845235482Z" level=info msg="RemovePodSandbox for \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\"" Nov 1 01:02:40.845293 containerd[1834]: time="2025-11-01T01:02:40.845253876Z" level=info msg="Forcibly stopping sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\"" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.861 [WARNING][6284] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0", GenerateName:"calico-kube-controllers-59cbdf9dd7-", Namespace:"calico-system", SelfLink:"", UID:"66631db9-6f47-4e8c-8fde-e00b56c3ece6", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59cbdf9dd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"749e55ff21e91719ecf018bc7603fbace567e7c73d5563e736017d4a291aea7f", Pod:"calico-kube-controllers-59cbdf9dd7-589d7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7ff2b71a6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.861 [INFO][6284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.861 [INFO][6284] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" iface="eth0" netns="" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.861 [INFO][6284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.861 [INFO][6284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.872 [INFO][6303] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.872 [INFO][6303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.872 [INFO][6303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.876 [WARNING][6303] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.876 [INFO][6303] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" HandleID="k8s-pod-network.7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--kube--controllers--59cbdf9dd7--589d7-eth0" Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.877 [INFO][6303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:40.879244 containerd[1834]: 2025-11-01 01:02:40.878 [INFO][6284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8" Nov 1 01:02:40.879528 containerd[1834]: time="2025-11-01T01:02:40.879242708Z" level=info msg="TearDown network for sandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\" successfully" Nov 1 01:02:40.881494 containerd[1834]: time="2025-11-01T01:02:40.881479978Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:40.881532 containerd[1834]: time="2025-11-01T01:02:40.881507678Z" level=info msg="RemovePodSandbox \"7757ae76e3ff5a07cba9d6e8f4fc97dc0830f2be44ec76606d60bd68877488b8\" returns successfully" Nov 1 01:02:40.881842 containerd[1834]: time="2025-11-01T01:02:40.881830692Z" level=info msg="StopPodSandbox for \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\"" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.899 [WARNING][6327] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c2067e6-df38-44d0-9df8-192be51b26fc", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f", Pod:"csi-node-driver-9vnfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic41e6d2f214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.899 [INFO][6327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.899 [INFO][6327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" iface="eth0" netns="" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.899 [INFO][6327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.899 [INFO][6327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.909 [INFO][6346] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.909 [INFO][6346] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.909 [INFO][6346] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.914 [WARNING][6346] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.914 [INFO][6346] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.915 [INFO][6346] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:40.916906 containerd[1834]: 2025-11-01 01:02:40.916 [INFO][6327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.917196 containerd[1834]: time="2025-11-01T01:02:40.916925886Z" level=info msg="TearDown network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\" successfully" Nov 1 01:02:40.917196 containerd[1834]: time="2025-11-01T01:02:40.916941122Z" level=info msg="StopPodSandbox for \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\" returns successfully" Nov 1 01:02:40.917196 containerd[1834]: time="2025-11-01T01:02:40.917100001Z" level=info msg="RemovePodSandbox for \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\"" Nov 1 01:02:40.917196 containerd[1834]: time="2025-11-01T01:02:40.917115625Z" level=info msg="Forcibly stopping sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\"" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.933 [WARNING][6371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c2067e6-df38-44d0-9df8-192be51b26fc", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"04e38c5b97d497c13ec57100e48c0ff50c5725a4b7a1f876328d709b85d09a0f", Pod:"csi-node-driver-9vnfp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic41e6d2f214", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.933 [INFO][6371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.933 [INFO][6371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" iface="eth0" netns="" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.933 [INFO][6371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.933 [INFO][6371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.942 [INFO][6386] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.942 [INFO][6386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.942 [INFO][6386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.946 [WARNING][6386] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.946 [INFO][6386] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" HandleID="k8s-pod-network.4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Workload="ci--4081.3.6--n--13ad226fb7-k8s-csi--node--driver--9vnfp-eth0" Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.947 [INFO][6386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:40.948912 containerd[1834]: 2025-11-01 01:02:40.948 [INFO][6371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6" Nov 1 01:02:40.949202 containerd[1834]: time="2025-11-01T01:02:40.948912358Z" level=info msg="TearDown network for sandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\" successfully" Nov 1 01:02:40.950286 containerd[1834]: time="2025-11-01T01:02:40.950272058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:40.950323 containerd[1834]: time="2025-11-01T01:02:40.950297866Z" level=info msg="RemovePodSandbox \"4de51bb03d39f151b1b96e6e5fd3fcc41a21dd23eaa306f1f1a04573daeae2d6\" returns successfully" Nov 1 01:02:40.950574 containerd[1834]: time="2025-11-01T01:02:40.950563437Z" level=info msg="StopPodSandbox for \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\"" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.967 [WARNING][6410] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f10f5e42-eb9a-47ab-8781-8e9dfee85efa", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26", Pod:"goldmane-666569f655-4gtnc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic127a7d4425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.967 [INFO][6410] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.967 [INFO][6410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" iface="eth0" netns="" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.967 [INFO][6410] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.967 [INFO][6410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.978 [INFO][6427] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.978 [INFO][6427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.979 [INFO][6427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.983 [WARNING][6427] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.983 [INFO][6427] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.984 [INFO][6427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:40.986018 containerd[1834]: 2025-11-01 01:02:40.985 [INFO][6410] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:40.986419 containerd[1834]: time="2025-11-01T01:02:40.986023200Z" level=info msg="TearDown network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\" successfully" Nov 1 01:02:40.986419 containerd[1834]: time="2025-11-01T01:02:40.986041386Z" level=info msg="StopPodSandbox for \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\" returns successfully" Nov 1 01:02:40.986419 containerd[1834]: time="2025-11-01T01:02:40.986333728Z" level=info msg="RemovePodSandbox for \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\"" Nov 1 01:02:40.986419 containerd[1834]: time="2025-11-01T01:02:40.986357377Z" level=info msg="Forcibly stopping sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\"" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.009 [WARNING][6451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f10f5e42-eb9a-47ab-8781-8e9dfee85efa", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"0ff6470ea1f6b132ff7d77dc136970ccdabd1e0c233920ff68d8db3d28782c26", Pod:"goldmane-666569f655-4gtnc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic127a7d4425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.010 [INFO][6451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.010 [INFO][6451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" iface="eth0" netns="" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.010 [INFO][6451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.010 [INFO][6451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.024 [INFO][6466] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.024 [INFO][6466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.024 [INFO][6466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.030 [WARNING][6466] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.030 [INFO][6466] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" HandleID="k8s-pod-network.425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Workload="ci--4081.3.6--n--13ad226fb7-k8s-goldmane--666569f655--4gtnc-eth0" Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.031 [INFO][6466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.033502 containerd[1834]: 2025-11-01 01:02:41.032 [INFO][6451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26" Nov 1 01:02:41.033502 containerd[1834]: time="2025-11-01T01:02:41.033486274Z" level=info msg="TearDown network for sandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\" successfully" Nov 1 01:02:41.035313 containerd[1834]: time="2025-11-01T01:02:41.035299616Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:41.035359 containerd[1834]: time="2025-11-01T01:02:41.035328536Z" level=info msg="RemovePodSandbox \"425112c7fb1539b5b656d67a96ff486fedde7f0414008070b9b0f79be9abdf26\" returns successfully" Nov 1 01:02:41.035625 containerd[1834]: time="2025-11-01T01:02:41.035614266Z" level=info msg="StopPodSandbox for \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\"" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.053 [WARNING][6494] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2848d47a-0e6d-4163-bcbd-cf745e94e4c6", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f", Pod:"calico-apiserver-65b64f6597-6gf7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a2664faa21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.053 [INFO][6494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.053 [INFO][6494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" iface="eth0" netns="" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.053 [INFO][6494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.053 [INFO][6494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.063 [INFO][6515] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.063 [INFO][6515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.063 [INFO][6515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.067 [WARNING][6515] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.067 [INFO][6515] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.068 [INFO][6515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.069712 containerd[1834]: 2025-11-01 01:02:41.068 [INFO][6494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.070007 containerd[1834]: time="2025-11-01T01:02:41.069734724Z" level=info msg="TearDown network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\" successfully" Nov 1 01:02:41.070007 containerd[1834]: time="2025-11-01T01:02:41.069750301Z" level=info msg="StopPodSandbox for \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\" returns successfully" Nov 1 01:02:41.070040 containerd[1834]: time="2025-11-01T01:02:41.070029368Z" level=info msg="RemovePodSandbox for \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\"" Nov 1 01:02:41.070062 containerd[1834]: time="2025-11-01T01:02:41.070047712Z" level=info msg="Forcibly stopping sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\"" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.087 [WARNING][6538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"2848d47a-0e6d-4163-bcbd-cf745e94e4c6", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"8b0ce8743b62cf8313b36e883174b8d14bb2c031035ba1ffd199692b16876c9f", Pod:"calico-apiserver-65b64f6597-6gf7n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a2664faa21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.087 [INFO][6538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.087 [INFO][6538] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" iface="eth0" netns="" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.087 [INFO][6538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.087 [INFO][6538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.097 [INFO][6553] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.097 [INFO][6553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.097 [INFO][6553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.101 [WARNING][6553] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.101 [INFO][6553] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" HandleID="k8s-pod-network.9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--6gf7n-eth0" Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.103 [INFO][6553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.104551 containerd[1834]: 2025-11-01 01:02:41.103 [INFO][6538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a" Nov 1 01:02:41.104551 containerd[1834]: time="2025-11-01T01:02:41.104543454Z" level=info msg="TearDown network for sandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\" successfully" Nov 1 01:02:41.106086 containerd[1834]: time="2025-11-01T01:02:41.106071545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:41.106117 containerd[1834]: time="2025-11-01T01:02:41.106095849Z" level=info msg="RemovePodSandbox \"9760525df7088d25d157a016397d0a2319df83b3b01676dfbbb82ebaa369694a\" returns successfully" Nov 1 01:02:41.106412 containerd[1834]: time="2025-11-01T01:02:41.106370453Z" level=info msg="StopPodSandbox for \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\"" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.123 [WARNING][6579] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.123 [INFO][6579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.123 [INFO][6579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" iface="eth0" netns="" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.123 [INFO][6579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.123 [INFO][6579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.134 [INFO][6597] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.134 [INFO][6597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.134 [INFO][6597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.138 [WARNING][6597] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.138 [INFO][6597] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.139 [INFO][6597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.140652 containerd[1834]: 2025-11-01 01:02:41.139 [INFO][6579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.140963 containerd[1834]: time="2025-11-01T01:02:41.140679318Z" level=info msg="TearDown network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\" successfully" Nov 1 01:02:41.140963 containerd[1834]: time="2025-11-01T01:02:41.140698769Z" level=info msg="StopPodSandbox for \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\" returns successfully" Nov 1 01:02:41.141031 containerd[1834]: time="2025-11-01T01:02:41.141018223Z" level=info msg="RemovePodSandbox for \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\"" Nov 1 01:02:41.141061 containerd[1834]: time="2025-11-01T01:02:41.141038061Z" level=info msg="Forcibly stopping sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\"" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.158 [WARNING][6625] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" WorkloadEndpoint="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.158 [INFO][6625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.158 [INFO][6625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" iface="eth0" netns="" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.158 [INFO][6625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.158 [INFO][6625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.169 [INFO][6641] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.169 [INFO][6641] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.169 [INFO][6641] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.173 [WARNING][6641] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.173 [INFO][6641] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" HandleID="k8s-pod-network.14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Workload="ci--4081.3.6--n--13ad226fb7-k8s-whisker--7596bf7846--q9gz4-eth0" Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.174 [INFO][6641] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.175519 containerd[1834]: 2025-11-01 01:02:41.174 [INFO][6625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a" Nov 1 01:02:41.175784 containerd[1834]: time="2025-11-01T01:02:41.175542590Z" level=info msg="TearDown network for sandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\" successfully" Nov 1 01:02:41.180500 containerd[1834]: time="2025-11-01T01:02:41.180454545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:41.180500 containerd[1834]: time="2025-11-01T01:02:41.180484523Z" level=info msg="RemovePodSandbox \"14ea27bbb01f7527fc23ecef182d61622caa6c1c1de54be0c32d7ebe1b3f982a\" returns successfully" Nov 1 01:02:41.180763 containerd[1834]: time="2025-11-01T01:02:41.180746764Z" level=info msg="StopPodSandbox for \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\"" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.198 [WARNING][6664] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"204dfdb6-7331-4950-b1cc-50b35a251b49", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919", Pod:"coredns-674b8bbfcf-b6jr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51bb174985e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.198 [INFO][6664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.198 [INFO][6664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" iface="eth0" netns="" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.198 [INFO][6664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.198 [INFO][6664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.208 [INFO][6681] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.208 [INFO][6681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.208 [INFO][6681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.212 [WARNING][6681] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.212 [INFO][6681] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.214 [INFO][6681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.215513 containerd[1834]: 2025-11-01 01:02:41.214 [INFO][6664] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.215797 containerd[1834]: time="2025-11-01T01:02:41.215533288Z" level=info msg="TearDown network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\" successfully" Nov 1 01:02:41.215797 containerd[1834]: time="2025-11-01T01:02:41.215551981Z" level=info msg="StopPodSandbox for \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\" returns successfully" Nov 1 01:02:41.215844 containerd[1834]: time="2025-11-01T01:02:41.215830696Z" level=info msg="RemovePodSandbox for \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\"" Nov 1 01:02:41.215863 containerd[1834]: time="2025-11-01T01:02:41.215850252Z" level=info msg="Forcibly stopping sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\"" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.234 [WARNING][6705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"204dfdb6-7331-4950-b1cc-50b35a251b49", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"e56f0a6594c0811611c7c9b79bce9c76eb8af837970d432489844fe422444919", Pod:"coredns-674b8bbfcf-b6jr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51bb174985e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 
01:02:41.234 [INFO][6705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.234 [INFO][6705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" iface="eth0" netns="" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.234 [INFO][6705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.234 [INFO][6705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.248 [INFO][6723] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.248 [INFO][6723] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.248 [INFO][6723] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.254 [WARNING][6723] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.254 [INFO][6723] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" HandleID="k8s-pod-network.2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--b6jr8-eth0" Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.255 [INFO][6723] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.257752 containerd[1834]: 2025-11-01 01:02:41.256 [INFO][6705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7" Nov 1 01:02:41.258181 containerd[1834]: time="2025-11-01T01:02:41.257757308Z" level=info msg="TearDown network for sandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\" successfully" Nov 1 01:02:41.259530 containerd[1834]: time="2025-11-01T01:02:41.259515191Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:41.259567 containerd[1834]: time="2025-11-01T01:02:41.259540102Z" level=info msg="RemovePodSandbox \"2c62c57b0b95de38fe715865b276211f71f8494385bd47abafd4ee5a0e5651a7\" returns successfully" Nov 1 01:02:41.259818 containerd[1834]: time="2025-11-01T01:02:41.259803088Z" level=info msg="StopPodSandbox for \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\"" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.276 [WARNING][6751] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ef21f2-b413-4ed0-8572-35cbc407e679", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60", Pod:"calico-apiserver-65b64f6597-jkbn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e417b6d8fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.277 [INFO][6751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.277 [INFO][6751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" iface="eth0" netns="" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.277 [INFO][6751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.277 [INFO][6751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.287 [INFO][6764] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.287 [INFO][6764] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.287 [INFO][6764] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.291 [WARNING][6764] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.291 [INFO][6764] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.292 [INFO][6764] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.294136 containerd[1834]: 2025-11-01 01:02:41.293 [INFO][6751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.294136 containerd[1834]: time="2025-11-01T01:02:41.294123759Z" level=info msg="TearDown network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\" successfully" Nov 1 01:02:41.294463 containerd[1834]: time="2025-11-01T01:02:41.294140741Z" level=info msg="StopPodSandbox for \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\" returns successfully" Nov 1 01:02:41.294463 containerd[1834]: time="2025-11-01T01:02:41.294452932Z" level=info msg="RemovePodSandbox for \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\"" Nov 1 01:02:41.294503 containerd[1834]: time="2025-11-01T01:02:41.294474150Z" level=info msg="Forcibly stopping sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\"" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.318 [WARNING][6790] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0", GenerateName:"calico-apiserver-65b64f6597-", Namespace:"calico-apiserver", SelfLink:"", UID:"61ef21f2-b413-4ed0-8572-35cbc407e679", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65b64f6597", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"30f86b4704873e4604ab76a363a7d4ec933de96b2a152e8f645112bed66ece60", Pod:"calico-apiserver-65b64f6597-jkbn2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e417b6d8fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.318 [INFO][6790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.318 [INFO][6790] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" iface="eth0" netns="" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.318 [INFO][6790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.318 [INFO][6790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.362 [INFO][6805] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.363 [INFO][6805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.363 [INFO][6805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.371 [WARNING][6805] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.371 [INFO][6805] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" HandleID="k8s-pod-network.8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Workload="ci--4081.3.6--n--13ad226fb7-k8s-calico--apiserver--65b64f6597--jkbn2-eth0" Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.372 [INFO][6805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.375552 containerd[1834]: 2025-11-01 01:02:41.374 [INFO][6790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5" Nov 1 01:02:41.376143 containerd[1834]: time="2025-11-01T01:02:41.375563695Z" level=info msg="TearDown network for sandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\" successfully" Nov 1 01:02:41.378367 containerd[1834]: time="2025-11-01T01:02:41.378351137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:41.378411 containerd[1834]: time="2025-11-01T01:02:41.378380547Z" level=info msg="RemovePodSandbox \"8565b5518b76d080e8ceeeb70c8eec9302df6f640406ec724fba6780302719c5\" returns successfully" Nov 1 01:02:41.378691 containerd[1834]: time="2025-11-01T01:02:41.378647451Z" level=info msg="StopPodSandbox for \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\"" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.397 [WARNING][6833] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4655cfaf-6ee0-4366-982d-ab89e39053ab", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50", Pod:"coredns-674b8bbfcf-mzcbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia21d69f4c6c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.397 [INFO][6833] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.397 [INFO][6833] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" iface="eth0" netns="" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.397 [INFO][6833] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.397 [INFO][6833] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.407 [INFO][6848] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.407 [INFO][6848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.407 [INFO][6848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.411 [WARNING][6848] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.411 [INFO][6848] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.413 [INFO][6848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.414619 containerd[1834]: 2025-11-01 01:02:41.413 [INFO][6833] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.414923 containerd[1834]: time="2025-11-01T01:02:41.414645180Z" level=info msg="TearDown network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\" successfully" Nov 1 01:02:41.414923 containerd[1834]: time="2025-11-01T01:02:41.414660933Z" level=info msg="StopPodSandbox for \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\" returns successfully" Nov 1 01:02:41.414960 containerd[1834]: time="2025-11-01T01:02:41.414943899Z" level=info msg="RemovePodSandbox for \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\"" Nov 1 01:02:41.414984 containerd[1834]: time="2025-11-01T01:02:41.414960133Z" level=info msg="Forcibly stopping sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\"" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.433 [WARNING][6873] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4655cfaf-6ee0-4366-982d-ab89e39053ab", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-13ad226fb7", ContainerID:"3d06aac202b1f5e444fbc70fc7c9a053945801c1e0cec2d05fd0e855828bae50", Pod:"coredns-674b8bbfcf-mzcbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia21d69f4c6c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 
01:02:41.433 [INFO][6873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.433 [INFO][6873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" iface="eth0" netns="" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.433 [INFO][6873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.433 [INFO][6873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.443 [INFO][6891] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.443 [INFO][6891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.443 [INFO][6891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.447 [WARNING][6891] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.447 [INFO][6891] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" HandleID="k8s-pod-network.122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Workload="ci--4081.3.6--n--13ad226fb7-k8s-coredns--674b8bbfcf--mzcbx-eth0" Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.448 [INFO][6891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:02:41.449806 containerd[1834]: 2025-11-01 01:02:41.449 [INFO][6873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01" Nov 1 01:02:41.450098 containerd[1834]: time="2025-11-01T01:02:41.449835997Z" level=info msg="TearDown network for sandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\" successfully" Nov 1 01:02:41.451285 containerd[1834]: time="2025-11-01T01:02:41.451271182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:02:41.451309 containerd[1834]: time="2025-11-01T01:02:41.451296557Z" level=info msg="RemovePodSandbox \"122e931886784df469cd40373cbeafb9bf99b342a4b2b631e5a1970e1b5b2e01\" returns successfully" Nov 1 01:02:43.769003 kubelet[3094]: E1101 01:02:43.768977 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:46.769847 kubelet[3094]: E1101 01:02:46.769777 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:47.769830 containerd[1834]: time="2025-11-01T01:02:47.769797734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:02:48.129936 containerd[1834]: time="2025-11-01T01:02:48.129828024Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:48.130381 containerd[1834]: time="2025-11-01T01:02:48.130298021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:02:48.130471 containerd[1834]: time="2025-11-01T01:02:48.130401013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:02:48.130569 kubelet[3094]: E1101 01:02:48.130494 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:48.130569 kubelet[3094]: E1101 01:02:48.130540 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:02:48.130948 kubelet[3094]: E1101 01:02:48.130683 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0af9fc2fd2874665a6a7d3eedbb1bf4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:48.132559 containerd[1834]: time="2025-11-01T01:02:48.132534762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
01:02:48.479494 containerd[1834]: time="2025-11-01T01:02:48.479329517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:48.479829 containerd[1834]: time="2025-11-01T01:02:48.479802758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:02:48.479991 containerd[1834]: time="2025-11-01T01:02:48.479949653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:02:48.482851 kubelet[3094]: E1101 01:02:48.482334 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:48.482851 kubelet[3094]: E1101 01:02:48.482395 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:02:48.482851 kubelet[3094]: E1101 01:02:48.482507 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:48.483969 kubelet[3094]: E1101 01:02:48.483937 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:02:48.771246 kubelet[3094]: E1101 01:02:48.771012 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:02:48.771246 kubelet[3094]: E1101 01:02:48.771038 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:02:51.769856 kubelet[3094]: E1101 01:02:51.769793 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:02:58.771778 containerd[1834]: time="2025-11-01T01:02:58.771677169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:02:58.772656 kubelet[3094]: E1101 01:02:58.772551 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:02:59.148339 containerd[1834]: time="2025-11-01T01:02:59.148149660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:59.149017 containerd[1834]: time="2025-11-01T01:02:59.148942153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:02:59.149052 containerd[1834]: time="2025-11-01T01:02:59.149014812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:02:59.149116 kubelet[3094]: E1101 01:02:59.149094 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:59.149146 kubelet[3094]: E1101 01:02:59.149126 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:02:59.149364 containerd[1834]: time="2025-11-01T01:02:59.149297563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:02:59.149406 kubelet[3094]: E1101 01:02:59.149284 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:59.150428 kubelet[3094]: E1101 01:02:59.150413 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:02:59.517212 containerd[1834]: time="2025-11-01T01:02:59.517160726Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:02:59.517684 containerd[1834]: time="2025-11-01T01:02:59.517667189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:02:59.517735 containerd[1834]: time="2025-11-01T01:02:59.517711609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:02:59.517803 kubelet[3094]: E1101 01:02:59.517782 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:59.517837 kubelet[3094]: E1101 01:02:59.517815 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:02:59.517925 kubelet[3094]: E1101 01:02:59.517899 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:02:59.519024 kubelet[3094]: E1101 01:02:59.519009 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:02:59.769868 containerd[1834]: time="2025-11-01T01:02:59.769770509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:03:00.114693 containerd[1834]: time="2025-11-01T01:03:00.114439396Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:03:00.115265 containerd[1834]: time="2025-11-01T01:03:00.115171270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:03:00.115265 containerd[1834]: time="2025-11-01T01:03:00.115242805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:03:00.115334 kubelet[3094]: E1101 01:03:00.115312 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:00.115477 kubelet[3094]: E1101 01:03:00.115342 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:00.115477 kubelet[3094]: E1101 01:03:00.115430 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:00.116540 kubelet[3094]: E1101 01:03:00.116526 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:03:02.769396 containerd[1834]: time="2025-11-01T01:03:02.769371823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:03:03.128003 containerd[1834]: time="2025-11-01T01:03:03.127750909Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:03.128623 containerd[1834]: time="2025-11-01T01:03:03.128553316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:03:03.128665 containerd[1834]: time="2025-11-01T01:03:03.128622349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:03:03.128761 kubelet[3094]: E1101 01:03:03.128692 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:03.128761 kubelet[3094]: E1101 01:03:03.128741 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:03.129001 kubelet[3094]: E1101 01:03:03.128866 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:03.129061 containerd[1834]: time="2025-11-01T01:03:03.128938424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:03:03.130466 kubelet[3094]: E1101 01:03:03.130434 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:03:03.490203 containerd[1834]: 
time="2025-11-01T01:03:03.490071540Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:03.491017 containerd[1834]: time="2025-11-01T01:03:03.490949872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:03:03.491064 containerd[1834]: time="2025-11-01T01:03:03.491013253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:03:03.491155 kubelet[3094]: E1101 01:03:03.491132 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:03:03.491187 kubelet[3094]: E1101 01:03:03.491164 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:03:03.491301 kubelet[3094]: E1101 01:03:03.491242 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:03.492713 containerd[1834]: time="2025-11-01T01:03:03.492670622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:03:03.837877 containerd[1834]: time="2025-11-01T01:03:03.837637208Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:03.838486 containerd[1834]: time="2025-11-01T01:03:03.838392571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:03:03.838520 containerd[1834]: time="2025-11-01T01:03:03.838476663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:03:03.838626 kubelet[3094]: E1101 01:03:03.838548 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:03:03.838626 kubelet[3094]: E1101 01:03:03.838623 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:03:03.838797 kubelet[3094]: E1101 
01:03:03.838743 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:03.839985 kubelet[3094]: E1101 01:03:03.839937 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:03:10.769606 kubelet[3094]: E1101 01:03:10.769578 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:03:10.770039 kubelet[3094]: E1101 
01:03:10.769840 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:03:12.771184 kubelet[3094]: E1101 01:03:12.771047 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:03:14.770506 kubelet[3094]: E1101 01:03:14.770406 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:03:17.771727 kubelet[3094]: E1101 01:03:17.771596 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:03:18.770214 kubelet[3094]: E1101 01:03:18.770181 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 
01:03:24.769372 kubelet[3094]: E1101 01:03:24.769303 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:03:25.775930 kubelet[3094]: E1101 01:03:25.774190 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:03:25.775930 kubelet[3094]: E1101 01:03:25.775592 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:03:25.783571 kubelet[3094]: E1101 01:03:25.782180 3094 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:03:30.773778 kubelet[3094]: E1101 01:03:30.773656 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:03:31.769326 kubelet[3094]: E1101 01:03:31.769303 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:03:36.768911 kubelet[3094]: E1101 01:03:36.768863 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:03:37.769590 containerd[1834]: time="2025-11-01T01:03:37.769527238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:03:38.139528 containerd[1834]: time="2025-11-01T01:03:38.139452931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:38.146892 containerd[1834]: time="2025-11-01T01:03:38.146871689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:03:38.146986 containerd[1834]: time="2025-11-01T01:03:38.146919583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:03:38.147041 kubelet[3094]: E1101 01:03:38.147021 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:03:38.147286 kubelet[3094]: E1101 01:03:38.147051 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:03:38.147286 kubelet[3094]: E1101 01:03:38.147126 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0af9fc2fd2874665a6a7d3eedbb1bf4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:38.149788 containerd[1834]: time="2025-11-01T01:03:38.149723809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
01:03:38.509320 containerd[1834]: time="2025-11-01T01:03:38.509172407Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:38.510194 containerd[1834]: time="2025-11-01T01:03:38.510117225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:03:38.510236 containerd[1834]: time="2025-11-01T01:03:38.510187021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:03:38.510334 kubelet[3094]: E1101 01:03:38.510285 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:03:38.510334 kubelet[3094]: E1101 01:03:38.510311 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:03:38.510398 kubelet[3094]: E1101 01:03:38.510373 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:38.511610 kubelet[3094]: E1101 01:03:38.511557 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:03:38.771404 kubelet[3094]: E1101 01:03:38.771164 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:03:39.769173 containerd[1834]: time="2025-11-01T01:03:39.769150734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:03:40.118808 containerd[1834]: time="2025-11-01T01:03:40.118540774Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 1 01:03:40.119615 containerd[1834]: time="2025-11-01T01:03:40.119537271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:03:40.119661 containerd[1834]: time="2025-11-01T01:03:40.119612319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:03:40.119772 kubelet[3094]: E1101 01:03:40.119720 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:03:40.119772 kubelet[3094]: E1101 01:03:40.119753 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:03:40.119966 kubelet[3094]: E1101 01:03:40.119831 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:40.121155 kubelet[3094]: E1101 01:03:40.121085 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:03:41.772336 kubelet[3094]: E1101 01:03:41.772203 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:03:45.770912 containerd[1834]: time="2025-11-01T01:03:45.770794847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:03:46.109127 containerd[1834]: time="2025-11-01T01:03:46.109051859Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:46.109509 containerd[1834]: time="2025-11-01T01:03:46.109489264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:03:46.109554 containerd[1834]: time="2025-11-01T01:03:46.109516670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:03:46.109613 kubelet[3094]: E1101 01:03:46.109593 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:46.109863 kubelet[3094]: E1101 01:03:46.109617 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:46.109863 kubelet[3094]: E1101 01:03:46.109689 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:46.110849 kubelet[3094]: E1101 01:03:46.110832 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:03:48.769441 containerd[1834]: time="2025-11-01T01:03:48.769393054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:03:49.130801 containerd[1834]: 
time="2025-11-01T01:03:49.130703675Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:49.131310 containerd[1834]: time="2025-11-01T01:03:49.131252691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:03:49.131341 containerd[1834]: time="2025-11-01T01:03:49.131314986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:03:49.131482 kubelet[3094]: E1101 01:03:49.131429 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:49.131482 kubelet[3094]: E1101 01:03:49.131463 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:03:49.131672 kubelet[3094]: E1101 01:03:49.131549 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:49.132723 kubelet[3094]: E1101 01:03:49.132685 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:03:52.776103 containerd[1834]: time="2025-11-01T01:03:52.776007363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:03:52.777078 kubelet[3094]: E1101 01:03:52.776518 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:03:53.141370 
containerd[1834]: time="2025-11-01T01:03:53.141144035Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:53.142063 containerd[1834]: time="2025-11-01T01:03:53.141986944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:03:53.142112 containerd[1834]: time="2025-11-01T01:03:53.142060355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:03:53.142230 kubelet[3094]: E1101 01:03:53.142168 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:03:53.142296 kubelet[3094]: E1101 01:03:53.142236 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:03:53.142418 kubelet[3094]: E1101 01:03:53.142356 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:53.144099 kubelet[3094]: E1101 01:03:53.144081 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:03:53.769234 containerd[1834]: time="2025-11-01T01:03:53.769181860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:03:54.131473 containerd[1834]: time="2025-11-01T01:03:54.131384217Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:03:54.131888 containerd[1834]: time="2025-11-01T01:03:54.131836073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:03:54.131888 containerd[1834]: time="2025-11-01T01:03:54.131868943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:03:54.132037 kubelet[3094]: E1101 01:03:54.132016 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:03:54.132196 kubelet[3094]: E1101 01:03:54.132058 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:03:54.132196 kubelet[3094]: E1101 01:03:54.132144 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:54.133740 containerd[1834]: time="2025-11-01T01:03:54.133679102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:03:54.501732 containerd[1834]: time="2025-11-01T01:03:54.501590735Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:03:54.502655 containerd[1834]: time="2025-11-01T01:03:54.502580956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:03:54.502655 containerd[1834]: time="2025-11-01T01:03:54.502636465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:03:54.502734 kubelet[3094]: E1101 01:03:54.502706 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:03:54.502766 kubelet[3094]: E1101 01:03:54.502739 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:03:54.502888 kubelet[3094]: E1101 
01:03:54.502807 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:03:54.504017 kubelet[3094]: E1101 01:03:54.503999 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:03:54.770932 kubelet[3094]: E1101 01:03:54.770695 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:03:58.770669 kubelet[3094]: E1101 
01:03:58.770606 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:04:03.769622 kubelet[3094]: E1101 01:04:03.769512 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:04:04.770813 kubelet[3094]: E1101 01:04:04.770723 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:04:05.770760 kubelet[3094]: E1101 01:04:05.770670 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:04:06.770456 kubelet[3094]: E1101 01:04:06.770367 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:04:06.770456 kubelet[3094]: E1101 01:04:06.770383 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:04:13.771393 kubelet[3094]: E1101 01:04:13.771295 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:04:16.771268 kubelet[3094]: E1101 01:04:16.771129 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:04:17.774457 kubelet[3094]: E1101 01:04:17.774428 3094 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:04:17.774805 kubelet[3094]: E1101 01:04:17.774540 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:04:18.771787 kubelet[3094]: E1101 01:04:18.771677 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:04:19.769784 kubelet[3094]: E1101 01:04:19.769734 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:04:25.769836 kubelet[3094]: E1101 01:04:25.769799 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" 
Nov 1 01:04:29.770436 kubelet[3094]: E1101 01:04:29.770336 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:04:30.769381 kubelet[3094]: E1101 01:04:30.769340 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:04:31.771183 kubelet[3094]: E1101 01:04:31.771041 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:04:32.778452 kubelet[3094]: E1101 01:04:32.778344 3094 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:04:32.778452 kubelet[3094]: E1101 01:04:32.778343 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:04:36.769717 kubelet[3094]: E1101 01:04:36.769639 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:04:41.771150 kubelet[3094]: E1101 01:04:41.770991 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:04:42.771335 kubelet[3094]: E1101 01:04:42.771193 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" 
podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:04:43.769442 kubelet[3094]: E1101 01:04:43.769380 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:04:44.776633 kubelet[3094]: E1101 01:04:44.776488 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:04:45.772403 kubelet[3094]: E1101 01:04:45.772260 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:04:49.770504 kubelet[3094]: E1101 01:04:49.770376 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:04:55.769961 kubelet[3094]: E1101 01:04:55.769870 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" 
podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:04:56.770806 kubelet[3094]: E1101 01:04:56.770666 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:04:56.771800 kubelet[3094]: E1101 01:04:56.770862 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:04:58.772406 kubelet[3094]: E1101 01:04:58.772200 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:04:59.769332 containerd[1834]: time="2025-11-01T01:04:59.769276049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:05:00.152922 containerd[1834]: time="2025-11-01T01:05:00.152658731Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:00.153640 containerd[1834]: time="2025-11-01T01:05:00.153570181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:05:00.153696 containerd[1834]: time="2025-11-01T01:05:00.153641223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:05:00.153774 kubelet[3094]: E1101 01:05:00.153754 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:05:00.153958 kubelet[3094]: E1101 01:05:00.153785 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:05:00.153958 kubelet[3094]: E1101 01:05:00.153855 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0af9fc2fd2874665a6a7d3eedbb1bf4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:00.155459 containerd[1834]: time="2025-11-01T01:05:00.155447830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:05:00.502510 containerd[1834]: time="2025-11-01T01:05:00.502375359Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:00.503301 containerd[1834]: time="2025-11-01T01:05:00.503275893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:05:00.503360 containerd[1834]: time="2025-11-01T01:05:00.503343500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:05:00.503503 kubelet[3094]: E1101 01:05:00.503450 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:05:00.503503 kubelet[3094]: E1101 01:05:00.503479 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:05:00.503650 kubelet[3094]: E1101 01:05:00.503560 3094 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:00.504780 kubelet[3094]: E1101 01:05:00.504731 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:05:01.770457 kubelet[3094]: E1101 01:05:01.770312 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:05:06.201648 update_engine[1821]: I20251101 01:05:06.201588 1821 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 01:05:06.201648 update_engine[1821]: I20251101 01:05:06.201615 
1821 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 01:05:06.201890 update_engine[1821]: I20251101 01:05:06.201718 1821 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 1 01:05:06.201976 update_engine[1821]: I20251101 01:05:06.201938 1821 omaha_request_params.cc:62] Current group set to lts Nov 1 01:05:06.202006 update_engine[1821]: I20251101 01:05:06.201996 1821 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 01:05:06.202006 update_engine[1821]: I20251101 01:05:06.202001 1821 update_attempter.cc:643] Scheduling an action processor start. Nov 1 01:05:06.202056 update_engine[1821]: I20251101 01:05:06.202009 1821 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 01:05:06.202056 update_engine[1821]: I20251101 01:05:06.202024 1821 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 01:05:06.202106 update_engine[1821]: I20251101 01:05:06.202054 1821 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 1 01:05:06.202106 update_engine[1821]: I20251101 01:05:06.202060 1821 omaha_request_action.cc:272] Request: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: Nov 1 01:05:06.202106 update_engine[1821]: I20251101 01:05:06.202064 1821 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:05:06.202312 locksmithd[1869]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 01:05:06.202887 update_engine[1821]: I20251101 01:05:06.202849 1821 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:05:06.203050 update_engine[1821]: 
I20251101 01:05:06.203014 1821 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 01:05:06.203718 update_engine[1821]: E20251101 01:05:06.203676 1821 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:05:06.203718 update_engine[1821]: I20251101 01:05:06.203711 1821 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 01:05:08.769527 kubelet[3094]: E1101 01:05:08.769481 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:05:08.770078 containerd[1834]: time="2025-11-01T01:05:08.769752920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:05:09.151853 containerd[1834]: time="2025-11-01T01:05:09.151594694Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:09.152585 containerd[1834]: time="2025-11-01T01:05:09.152512819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:05:09.152628 containerd[1834]: time="2025-11-01T01:05:09.152581406Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:05:09.152751 kubelet[3094]: E1101 
01:05:09.152698 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:05:09.152751 kubelet[3094]: E1101 01:05:09.152732 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:05:09.152885 kubelet[3094]: E1101 01:05:09.152832 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagati
on:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:09.154016 kubelet[3094]: E1101 01:05:09.153974 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:05:09.772268 kubelet[3094]: E1101 01:05:09.772105 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:05:11.770872 containerd[1834]: time="2025-11-01T01:05:11.770780910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:05:11.771563 kubelet[3094]: E1101 01:05:11.771199 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:05:12.118508 containerd[1834]: time="2025-11-01T01:05:12.118201816Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:12.119316 containerd[1834]: time="2025-11-01T01:05:12.119244077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:05:12.119399 containerd[1834]: time="2025-11-01T01:05:12.119313873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:05:12.119505 kubelet[3094]: E1101 01:05:12.119454 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:05:12.119505 kubelet[3094]: E1101 01:05:12.119484 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:05:12.119625 kubelet[3094]: E1101 01:05:12.119563 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:12.121352 kubelet[3094]: E1101 01:05:12.121307 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:05:14.771418 containerd[1834]: time="2025-11-01T01:05:14.771300527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:05:15.129562 containerd[1834]: 
time="2025-11-01T01:05:15.129336240Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:15.134562 containerd[1834]: time="2025-11-01T01:05:15.134435805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:05:15.134562 containerd[1834]: time="2025-11-01T01:05:15.134524281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:05:15.134730 kubelet[3094]: E1101 01:05:15.134688 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:05:15.135015 kubelet[3094]: E1101 01:05:15.134728 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:05:15.135015 kubelet[3094]: E1101 01:05:15.134881 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:15.136103 kubelet[3094]: E1101 01:05:15.136061 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:05:16.111441 update_engine[1821]: I20251101 01:05:16.111311 1821 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:05:16.112061 update_engine[1821]: I20251101 01:05:16.111738 1821 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:05:16.112145 update_engine[1821]: I20251101 01:05:16.112108 1821 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 1 01:05:16.112771 update_engine[1821]: E20251101 01:05:16.112682 1821 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:05:16.112909 update_engine[1821]: I20251101 01:05:16.112793 1821 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 1 01:05:21.771982 containerd[1834]: time="2025-11-01T01:05:21.771883033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:05:22.120281 containerd[1834]: time="2025-11-01T01:05:22.120177930Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:22.120679 containerd[1834]: time="2025-11-01T01:05:22.120659209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:05:22.120729 containerd[1834]: time="2025-11-01T01:05:22.120708682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:05:22.120950 kubelet[3094]: E1101 01:05:22.120890 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:05:22.120950 kubelet[3094]: E1101 01:05:22.120924 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" 
Nov 1 01:05:22.121157 kubelet[3094]: E1101 01:05:22.121009 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:22.122176 kubelet[3094]: E1101 01:05:22.122161 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:05:22.771693 kubelet[3094]: E1101 01:05:22.771564 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:05:24.770326 kubelet[3094]: E1101 01:05:24.770212 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:05:24.773596 containerd[1834]: time="2025-11-01T01:05:24.772919410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:05:25.127627 containerd[1834]: time="2025-11-01T01:05:25.127528498Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:25.128058 containerd[1834]: time="2025-11-01T01:05:25.127988546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:05:25.128058 containerd[1834]: time="2025-11-01T01:05:25.128046336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:05:25.128134 kubelet[3094]: E1101 01:05:25.128112 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:05:25.128163 kubelet[3094]: E1101 01:05:25.128144 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:05:25.128280 kubelet[3094]: E1101 01:05:25.128245 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:25.129929 containerd[1834]: time="2025-11-01T01:05:25.129887915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:05:25.458563 containerd[1834]: time="2025-11-01T01:05:25.458452355Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:05:25.459231 containerd[1834]: time="2025-11-01T01:05:25.459201346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:05:25.459298 containerd[1834]: time="2025-11-01T01:05:25.459228990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:05:25.459353 kubelet[3094]: E1101 01:05:25.459333 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:05:25.459402 kubelet[3094]: E1101 01:05:25.459360 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:05:25.459476 kubelet[3094]: E1101 
01:05:25.459432 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:05:25.460629 kubelet[3094]: E1101 01:05:25.460612 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:05:25.770853 kubelet[3094]: E1101 01:05:25.770633 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:05:26.111446 update_engine[1821]: I20251101 01:05:26.111259 1821 
libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:05:26.111869 update_engine[1821]: I20251101 01:05:26.111711 1821 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:05:26.112153 update_engine[1821]: I20251101 01:05:26.112092 1821 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 01:05:26.112723 update_engine[1821]: E20251101 01:05:26.112663 1821 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:05:26.112797 update_engine[1821]: I20251101 01:05:26.112774 1821 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 1 01:05:27.770023 kubelet[3094]: E1101 01:05:27.769952 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:05:36.105554 update_engine[1821]: I20251101 01:05:36.105374 1821 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:05:36.106377 update_engine[1821]: I20251101 01:05:36.105811 1821 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:05:36.106377 update_engine[1821]: I20251101 01:05:36.106164 1821 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 1 01:05:36.106883 update_engine[1821]: E20251101 01:05:36.106796 1821 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:05:36.107017 update_engine[1821]: I20251101 01:05:36.106886 1821 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 01:05:36.107017 update_engine[1821]: I20251101 01:05:36.106907 1821 omaha_request_action.cc:617] Omaha request response: Nov 1 01:05:36.107139 update_engine[1821]: E20251101 01:05:36.107017 1821 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 1 01:05:36.107139 update_engine[1821]: I20251101 01:05:36.107055 1821 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 1 01:05:36.107139 update_engine[1821]: I20251101 01:05:36.107066 1821 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 01:05:36.107139 update_engine[1821]: I20251101 01:05:36.107075 1821 update_attempter.cc:306] Processing Done. Nov 1 01:05:36.107139 update_engine[1821]: E20251101 01:05:36.107095 1821 update_attempter.cc:619] Update failed. Nov 1 01:05:36.107139 update_engine[1821]: I20251101 01:05:36.107106 1821 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 1 01:05:36.107139 update_engine[1821]: I20251101 01:05:36.107115 1821 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 1 01:05:36.107139 update_engine[1821]: I20251101 01:05:36.107126 1821 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Nov 1 01:05:36.107578 update_engine[1821]: I20251101 01:05:36.107241 1821 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 01:05:36.107578 update_engine[1821]: I20251101 01:05:36.107287 1821 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 1 01:05:36.107578 update_engine[1821]: I20251101 01:05:36.107300 1821 omaha_request_action.cc:272] Request: Nov 1 01:05:36.107578 update_engine[1821]: Nov 1 01:05:36.107578 update_engine[1821]: Nov 1 01:05:36.107578 update_engine[1821]: Nov 1 01:05:36.107578 update_engine[1821]: Nov 1 01:05:36.107578 update_engine[1821]: Nov 1 01:05:36.107578 update_engine[1821]: Nov 1 01:05:36.107578 update_engine[1821]: I20251101 01:05:36.107311 1821 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 01:05:36.107578 update_engine[1821]: I20251101 01:05:36.107560 1821 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 01:05:36.108155 update_engine[1821]: I20251101 01:05:36.107841 1821 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 1 01:05:36.108252 locksmithd[1869]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 1 01:05:36.108663 update_engine[1821]: E20251101 01:05:36.108268 1821 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 01:05:36.108663 update_engine[1821]: I20251101 01:05:36.108340 1821 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 01:05:36.108663 update_engine[1821]: I20251101 01:05:36.108356 1821 omaha_request_action.cc:617] Omaha request response: Nov 1 01:05:36.108663 update_engine[1821]: I20251101 01:05:36.108367 1821 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 01:05:36.108663 update_engine[1821]: I20251101 01:05:36.108376 1821 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 01:05:36.108663 update_engine[1821]: I20251101 01:05:36.108386 1821 update_attempter.cc:306] Processing Done. Nov 1 01:05:36.108663 update_engine[1821]: I20251101 01:05:36.108397 1821 update_attempter.cc:310] Error event sent. 
Nov 1 01:05:36.108663 update_engine[1821]: I20251101 01:05:36.108410 1821 update_check_scheduler.cc:74] Next update check in 45m26s Nov 1 01:05:36.109074 locksmithd[1869]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 1 01:05:36.770925 kubelet[3094]: E1101 01:05:36.770790 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:05:36.772523 kubelet[3094]: E1101 01:05:36.772375 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:05:38.771575 kubelet[3094]: E1101 01:05:38.771445 3094 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:05:39.768815 kubelet[3094]: E1101 01:05:39.768745 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:05:39.768815 kubelet[3094]: E1101 01:05:39.768757 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:05:40.773838 kubelet[3094]: E1101 01:05:40.773698 3094 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:05:47.769649 kubelet[3094]: E1101 01:05:47.769605 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" 
podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:05:49.770775 kubelet[3094]: E1101 01:05:49.770651 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:05:51.770572 kubelet[3094]: E1101 01:05:51.770471 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:05:52.771629 kubelet[3094]: E1101 01:05:52.771509 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:05:53.771121 kubelet[3094]: E1101 01:05:53.771013 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:05:54.771281 kubelet[3094]: E1101 01:05:54.771178 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:06:00.775371 kubelet[3094]: E1101 01:06:00.775194 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:06:03.769785 kubelet[3094]: E1101 01:06:03.769757 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:06:03.769785 kubelet[3094]: E1101 01:06:03.769757 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:06:04.771934 kubelet[3094]: E1101 01:06:04.771901 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:06:07.770069 kubelet[3094]: E1101 01:06:07.770020 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:06:08.780626 kubelet[3094]: E1101 01:06:08.780509 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:06:11.769409 kubelet[3094]: E1101 01:06:11.769382 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:06:15.771035 kubelet[3094]: E1101 01:06:15.770912 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:06:16.770205 kubelet[3094]: E1101 01:06:16.770144 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:06:19.773547 kubelet[3094]: E1101 01:06:19.773519 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:06:19.773547 kubelet[3094]: E1101 01:06:19.773513 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:06:19.773926 kubelet[3094]: E1101 01:06:19.773728 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:06:26.771978 kubelet[3094]: E1101 01:06:26.771882 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:06:27.769129 kubelet[3094]: E1101 01:06:27.769081 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:06:28.770136 kubelet[3094]: E1101 01:06:28.770083 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:06:31.770330 kubelet[3094]: E1101 01:06:31.770243 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:06:33.771373 kubelet[3094]: E1101 01:06:33.771281 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:06:33.772765 kubelet[3094]: E1101 01:06:33.772006 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:06:38.771104 kubelet[3094]: E1101 01:06:38.771076 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:06:42.770992 kubelet[3094]: E1101 01:06:42.770877 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:06:42.772057 kubelet[3094]: E1101 01:06:42.770962 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:06:46.770169 kubelet[3094]: E1101 01:06:46.770104 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:06:47.769725 kubelet[3094]: E1101 01:06:47.769677 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:06:48.771756 kubelet[3094]: E1101 01:06:48.771667 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:06:51.769565 kubelet[3094]: E1101 01:06:51.769539 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:06:55.768877 kubelet[3094]: E1101 01:06:55.768805 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:06:56.772161 kubelet[3094]: E1101 01:06:56.772128 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:06:59.769386 kubelet[3094]: E1101 01:06:59.769343 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:07:00.770594 kubelet[3094]: E1101 01:07:00.770537 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:07:01.772483 kubelet[3094]: E1101 01:07:01.772373 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:07:04.771810 kubelet[3094]: E1101 01:07:04.771619 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:07:07.769136 kubelet[3094]: E1101 01:07:07.769113 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:07:07.769136 kubelet[3094]: E1101 01:07:07.769113 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:07:13.768969 kubelet[3094]: E1101 01:07:13.768943 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:07:13.768969 kubelet[3094]: E1101 01:07:13.768959 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:07:15.769739 kubelet[3094]: E1101 01:07:15.769697 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:07:16.771092 
kubelet[3094]: E1101 01:07:16.771035 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:07:21.770459 kubelet[3094]: E1101 01:07:21.770372 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:07:22.771312 kubelet[3094]: E1101 01:07:22.771203 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:07:25.770411 kubelet[3094]: E1101 01:07:25.770280 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:07:25.770411 kubelet[3094]: E1101 01:07:25.770386 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:07:27.769304 kubelet[3094]: E1101 01:07:27.769279 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:07:31.769202 kubelet[3094]: E1101 01:07:31.769177 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:07:33.771095 kubelet[3094]: E1101 01:07:33.771000 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:07:35.771520 kubelet[3094]: E1101 01:07:35.771401 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:07:39.769611 kubelet[3094]: E1101 01:07:39.769586 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:07:40.772183 kubelet[3094]: E1101 01:07:40.772121 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:07:40.772183 kubelet[3094]: E1101 01:07:40.772125 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:07:42.769125 kubelet[3094]: E1101 01:07:42.769094 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:07:46.774250 kubelet[3094]: E1101 01:07:46.774125 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:07:47.769753 kubelet[3094]: E1101 01:07:47.769724 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:07:51.769622 kubelet[3094]: E1101 01:07:51.769533 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:07:53.771852 containerd[1834]: time="2025-11-01T01:07:53.771719932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:07:54.143586 containerd[1834]: time="2025-11-01T01:07:54.143488649Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:07:54.144005 containerd[1834]: time="2025-11-01T01:07:54.143952265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:07:54.144042 containerd[1834]: time="2025-11-01T01:07:54.144003513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:07:54.144158 kubelet[3094]: E1101 01:07:54.144136 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:07:54.144391 kubelet[3094]: E1101 01:07:54.144168 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:07:54.144391 kubelet[3094]: E1101 01:07:54.144250 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0af9fc2fd2874665a6a7d3eedbb1bf4a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:07:54.145873 containerd[1834]: time="2025-11-01T01:07:54.145862726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:07:54.478413 containerd[1834]: time="2025-11-01T01:07:54.478315079Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:07:54.479425 containerd[1834]: time="2025-11-01T01:07:54.479342784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:07:54.479425 containerd[1834]: time="2025-11-01T01:07:54.479409651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:07:54.479553 kubelet[3094]: E1101 01:07:54.479496 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:07:54.479553 kubelet[3094]: E1101 01:07:54.479528 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:07:54.479637 kubelet[3094]: E1101 01:07:54.479597 3094 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sc55k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57bf865dbd-dvkh4_calico-system(53044ffa-faec-4388-b8a9-277f38bf6718): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:07:54.480806 kubelet[3094]: E1101 01:07:54.480760 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:07:54.771998 containerd[1834]: time="2025-11-01T01:07:54.771771464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:07:54.772728 kubelet[3094]: E1101 01:07:54.772485 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:07:55.139540 containerd[1834]: time="2025-11-01T01:07:55.139408464Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:07:55.140100 containerd[1834]: time="2025-11-01T01:07:55.140004199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:07:55.140154 containerd[1834]: time="2025-11-01T01:07:55.140073289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:07:55.140395 kubelet[3094]: E1101 01:07:55.140193 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:07:55.140465 kubelet[3094]: E1101 01:07:55.140401 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:07:55.141363 kubelet[3094]: E1101 01:07:55.140743 3094 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n9628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-jkbn2_calico-apiserver(61ef21f2-b413-4ed0-8572-35cbc407e679): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:07:55.141841 kubelet[3094]: E1101 01:07:55.141801 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:08:00.773292 containerd[1834]: time="2025-11-01T01:08:00.773184317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:08:01.138772 containerd[1834]: 
time="2025-11-01T01:08:01.138506558Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:08:01.139299 containerd[1834]: time="2025-11-01T01:08:01.139229814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:08:01.139364 containerd[1834]: time="2025-11-01T01:08:01.139305101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:08:01.139559 kubelet[3094]: E1101 01:08:01.139507 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:08:01.139559 kubelet[3094]: E1101 01:08:01.139539 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:08:01.139811 kubelet[3094]: E1101 01:08:01.139651 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pvl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59cbdf9dd7-589d7_calico-system(66631db9-6f47-4e8c-8fde-e00b56c3ece6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:08:01.140821 kubelet[3094]: E1101 01:08:01.140806 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:08:02.776548 containerd[1834]: time="2025-11-01T01:08:02.776435785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:08:03.155298 containerd[1834]: 
time="2025-11-01T01:08:03.155021353Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:08:03.156144 containerd[1834]: time="2025-11-01T01:08:03.156067703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:08:03.156184 containerd[1834]: time="2025-11-01T01:08:03.156137357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:08:03.156289 kubelet[3094]: E1101 01:08:03.156243 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:08:03.156289 kubelet[3094]: E1101 01:08:03.156274 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:08:03.156496 kubelet[3094]: E1101 01:08:03.156355 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4gtnc_calico-system(f10f5e42-eb9a-47ab-8781-8e9dfee85efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:08:03.157560 kubelet[3094]: E1101 01:08:03.157518 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:08:05.771037 containerd[1834]: time="2025-11-01T01:08:05.770929939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:08:06.127173 containerd[1834]: time="2025-11-01T01:08:06.126928693Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:08:06.127811 containerd[1834]: time="2025-11-01T01:08:06.127785372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:08:06.127877 containerd[1834]: time="2025-11-01T01:08:06.127856978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:08:06.127998 kubelet[3094]: E1101 01:08:06.127968 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:08:06.128248 kubelet[3094]: E1101 01:08:06.128008 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:08:06.128248 kubelet[3094]: E1101 01:08:06.128116 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfxv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65b64f6597-6gf7n_calico-apiserver(2848d47a-0e6d-4163-bcbd-cf745e94e4c6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:08:06.129268 kubelet[3094]: E1101 01:08:06.129251 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:08:07.769941 kubelet[3094]: E1101 01:08:07.769870 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:08:07.770388 containerd[1834]: time="2025-11-01T01:08:07.770065700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:08:08.124394 containerd[1834]: time="2025-11-01T01:08:08.124126794Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:08:08.125141 containerd[1834]: time="2025-11-01T01:08:08.125065601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:08:08.125141 containerd[1834]: time="2025-11-01T01:08:08.125130780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:08:08.125286 kubelet[3094]: E1101 01:08:08.125224 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:08:08.125286 kubelet[3094]: E1101 01:08:08.125260 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:08:08.125359 kubelet[3094]: E1101 01:08:08.125339 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:08:08.126862 containerd[1834]: time="2025-11-01T01:08:08.126821976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:08:08.463111 containerd[1834]: time="2025-11-01T01:08:08.463083481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:08:08.463729 containerd[1834]: time="2025-11-01T01:08:08.463675535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:08:08.463766 containerd[1834]: time="2025-11-01T01:08:08.463732646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:08:08.463860 kubelet[3094]: E1101 01:08:08.463836 3094 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:08:08.463890 kubelet[3094]: E1101 01:08:08.463869 3094 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:08:08.463968 kubelet[3094]: E1101 
01:08:08.463945 3094 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzr2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-9vnfp_calico-system(1c2067e6-df38-44d0-9df8-192be51b26fc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:08:08.465107 kubelet[3094]: E1101 01:08:08.465090 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:08:08.771556 kubelet[3094]: E1101 01:08:08.771310 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:08:13.770611 kubelet[3094]: E1101 01:08:13.770489 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:08:15.771016 kubelet[3094]: E1101 01:08:15.770917 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:08:17.188177 systemd[1]: Started sshd@9-145.40.82.59:22-139.178.89.65:58284.service - OpenSSH per-connection server daemon (139.178.89.65:58284). 
Nov 1 01:08:17.257973 sshd[7421]: Accepted publickey for core from 139.178.89.65 port 58284 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:17.261640 sshd[7421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:17.274047 systemd-logind[1816]: New session 12 of user core. Nov 1 01:08:17.295699 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 01:08:17.419942 sshd[7421]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:17.421596 systemd[1]: sshd@9-145.40.82.59:22-139.178.89.65:58284.service: Deactivated successfully. Nov 1 01:08:17.422510 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 01:08:17.423137 systemd-logind[1816]: Session 12 logged out. Waiting for processes to exit. Nov 1 01:08:17.423829 systemd-logind[1816]: Removed session 12. Nov 1 01:08:18.770150 kubelet[3094]: E1101 01:08:18.770064 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:08:20.770594 kubelet[3094]: E1101 01:08:20.770535 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:08:20.771293 kubelet[3094]: E1101 01:08:20.771177 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:08:22.440927 systemd[1]: Started sshd@10-145.40.82.59:22-139.178.89.65:58292.service - OpenSSH per-connection server daemon (139.178.89.65:58292). Nov 1 01:08:22.468210 sshd[7451]: Accepted publickey for core from 139.178.89.65 port 58292 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:22.468970 sshd[7451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:22.471622 systemd-logind[1816]: New session 13 of user core. Nov 1 01:08:22.492727 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 01:08:22.651516 sshd[7451]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:22.654147 systemd[1]: sshd@10-145.40.82.59:22-139.178.89.65:58292.service: Deactivated successfully. 
Nov 1 01:08:22.655769 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 01:08:22.657406 systemd-logind[1816]: Session 13 logged out. Waiting for processes to exit. Nov 1 01:08:22.658848 systemd-logind[1816]: Removed session 13. Nov 1 01:08:22.772455 kubelet[3094]: E1101 01:08:22.772194 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:08:26.771131 kubelet[3094]: E1101 01:08:26.771034 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:08:27.664530 systemd[1]: Started 
sshd@11-145.40.82.59:22-139.178.89.65:42974.service - OpenSSH per-connection server daemon (139.178.89.65:42974). Nov 1 01:08:27.691990 sshd[7480]: Accepted publickey for core from 139.178.89.65 port 42974 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:27.692848 sshd[7480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:27.695660 systemd-logind[1816]: New session 14 of user core. Nov 1 01:08:27.712454 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 01:08:27.793930 sshd[7480]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:27.812004 systemd[1]: sshd@11-145.40.82.59:22-139.178.89.65:42974.service: Deactivated successfully. Nov 1 01:08:27.812839 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 01:08:27.813559 systemd-logind[1816]: Session 14 logged out. Waiting for processes to exit. Nov 1 01:08:27.814288 systemd[1]: Started sshd@12-145.40.82.59:22-139.178.89.65:42984.service - OpenSSH per-connection server daemon (139.178.89.65:42984). Nov 1 01:08:27.814700 systemd-logind[1816]: Removed session 14. Nov 1 01:08:27.841893 sshd[7507]: Accepted publickey for core from 139.178.89.65 port 42984 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:27.842661 sshd[7507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:27.844930 systemd-logind[1816]: New session 15 of user core. Nov 1 01:08:27.862449 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 01:08:27.989150 sshd[7507]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:27.999387 systemd[1]: sshd@12-145.40.82.59:22-139.178.89.65:42984.service: Deactivated successfully. Nov 1 01:08:28.000436 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 01:08:28.001203 systemd-logind[1816]: Session 15 logged out. Waiting for processes to exit. 
Nov 1 01:08:28.001934 systemd[1]: Started sshd@13-145.40.82.59:22-139.178.89.65:42994.service - OpenSSH per-connection server daemon (139.178.89.65:42994). Nov 1 01:08:28.002353 systemd-logind[1816]: Removed session 15. Nov 1 01:08:28.030971 sshd[7533]: Accepted publickey for core from 139.178.89.65 port 42994 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:28.031911 sshd[7533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:28.034798 systemd-logind[1816]: New session 16 of user core. Nov 1 01:08:28.052456 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 01:08:28.174812 sshd[7533]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:28.176768 systemd[1]: sshd@13-145.40.82.59:22-139.178.89.65:42994.service: Deactivated successfully. Nov 1 01:08:28.177606 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 01:08:28.178006 systemd-logind[1816]: Session 16 logged out. Waiting for processes to exit. Nov 1 01:08:28.178573 systemd-logind[1816]: Removed session 16. 
Nov 1 01:08:29.771502 kubelet[3094]: E1101 01:08:29.771395 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:08:30.772757 kubelet[3094]: E1101 01:08:30.772680 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:08:33.208207 systemd[1]: Started sshd@14-145.40.82.59:22-139.178.89.65:43008.service - OpenSSH per-connection server daemon (139.178.89.65:43008). Nov 1 01:08:33.263438 sshd[7604]: Accepted publickey for core from 139.178.89.65 port 43008 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:33.264205 sshd[7604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:33.266745 systemd-logind[1816]: New session 17 of user core. Nov 1 01:08:33.285384 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 1 01:08:33.400800 sshd[7604]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:33.402500 systemd[1]: sshd@14-145.40.82.59:22-139.178.89.65:43008.service: Deactivated successfully. Nov 1 01:08:33.403422 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 01:08:33.404136 systemd-logind[1816]: Session 17 logged out. Waiting for processes to exit. Nov 1 01:08:33.404841 systemd-logind[1816]: Removed session 17. Nov 1 01:08:35.769362 kubelet[3094]: E1101 01:08:35.769308 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:08:35.769658 kubelet[3094]: E1101 01:08:35.769560 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:08:37.771969 kubelet[3094]: E1101 01:08:37.771858 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc" Nov 1 01:08:38.439427 systemd[1]: Started sshd@15-145.40.82.59:22-139.178.89.65:32884.service - OpenSSH per-connection server daemon (139.178.89.65:32884). Nov 1 01:08:38.464628 sshd[7648]: Accepted publickey for core from 139.178.89.65 port 32884 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:38.465407 sshd[7648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:38.468079 systemd-logind[1816]: New session 18 of user core. Nov 1 01:08:38.484419 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 01:08:38.612735 sshd[7648]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:38.614576 systemd[1]: sshd@15-145.40.82.59:22-139.178.89.65:32884.service: Deactivated successfully. 
Nov 1 01:08:38.615660 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 01:08:38.616509 systemd-logind[1816]: Session 18 logged out. Waiting for processes to exit. Nov 1 01:08:38.617174 systemd-logind[1816]: Removed session 18. Nov 1 01:08:41.771028 kubelet[3094]: E1101 01:08:41.770905 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa" Nov 1 01:08:42.771330 kubelet[3094]: E1101 01:08:42.771217 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679" Nov 1 01:08:43.630191 systemd[1]: Started sshd@16-145.40.82.59:22-139.178.89.65:32890.service - OpenSSH per-connection server daemon (139.178.89.65:32890). Nov 1 01:08:43.664717 sshd[7676]: Accepted publickey for core from 139.178.89.65 port 32890 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:43.665511 sshd[7676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:43.667942 systemd-logind[1816]: New session 19 of user core. 
Nov 1 01:08:43.668513 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 01:08:43.745951 sshd[7676]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:43.747533 systemd[1]: sshd@16-145.40.82.59:22-139.178.89.65:32890.service: Deactivated successfully. Nov 1 01:08:43.748452 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 01:08:43.749093 systemd-logind[1816]: Session 19 logged out. Waiting for processes to exit. Nov 1 01:08:43.749691 systemd-logind[1816]: Removed session 19. Nov 1 01:08:43.769117 kubelet[3094]: E1101 01:08:43.769091 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6" Nov 1 01:08:47.769578 kubelet[3094]: E1101 01:08:47.769521 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6" Nov 1 01:08:47.769868 kubelet[3094]: E1101 01:08:47.769798 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718" Nov 1 01:08:48.774350 systemd[1]: Started sshd@17-145.40.82.59:22-139.178.89.65:40476.service - OpenSSH per-connection server daemon (139.178.89.65:40476). Nov 1 01:08:48.835799 sshd[7704]: Accepted publickey for core from 139.178.89.65 port 40476 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk Nov 1 01:08:48.837381 sshd[7704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:08:48.842085 systemd-logind[1816]: New session 20 of user core. Nov 1 01:08:48.862593 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 01:08:48.998396 sshd[7704]: pam_unix(sshd:session): session closed for user core Nov 1 01:08:49.015063 systemd[1]: sshd@17-145.40.82.59:22-139.178.89.65:40476.service: Deactivated successfully. Nov 1 01:08:49.015913 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 01:08:49.016689 systemd-logind[1816]: Session 20 logged out. Waiting for processes to exit. 
Nov 1 01:08:49.017359 systemd[1]: Started sshd@18-145.40.82.59:22-139.178.89.65:40478.service - OpenSSH per-connection server daemon (139.178.89.65:40478).
Nov 1 01:08:49.017859 systemd-logind[1816]: Removed session 20.
Nov 1 01:08:49.046780 sshd[7729]: Accepted publickey for core from 139.178.89.65 port 40478 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:08:49.047804 sshd[7729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:08:49.051507 systemd-logind[1816]: New session 21 of user core.
Nov 1 01:08:49.075536 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 01:08:49.224809 sshd[7729]: pam_unix(sshd:session): session closed for user core
Nov 1 01:08:49.238240 systemd[1]: sshd@18-145.40.82.59:22-139.178.89.65:40478.service: Deactivated successfully.
Nov 1 01:08:49.239166 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 01:08:49.239880 systemd-logind[1816]: Session 21 logged out. Waiting for processes to exit.
Nov 1 01:08:49.240702 systemd[1]: Started sshd@19-145.40.82.59:22-139.178.89.65:40480.service - OpenSSH per-connection server daemon (139.178.89.65:40480).
Nov 1 01:08:49.241349 systemd-logind[1816]: Removed session 21.
Nov 1 01:08:49.271237 sshd[7753]: Accepted publickey for core from 139.178.89.65 port 40480 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:08:49.272525 sshd[7753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:08:49.277110 systemd-logind[1816]: New session 22 of user core.
Nov 1 01:08:49.296522 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 1 01:08:49.956702 sshd[7753]: pam_unix(sshd:session): session closed for user core
Nov 1 01:08:49.968977 systemd[1]: sshd@19-145.40.82.59:22-139.178.89.65:40480.service: Deactivated successfully.
Nov 1 01:08:49.969811 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 01:08:49.970504 systemd-logind[1816]: Session 22 logged out. Waiting for processes to exit.
Nov 1 01:08:49.971187 systemd[1]: Started sshd@20-145.40.82.59:22-139.178.89.65:40484.service - OpenSSH per-connection server daemon (139.178.89.65:40484).
Nov 1 01:08:49.971644 systemd-logind[1816]: Removed session 22.
Nov 1 01:08:49.998869 sshd[7785]: Accepted publickey for core from 139.178.89.65 port 40484 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:08:49.999710 sshd[7785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:08:50.002207 systemd-logind[1816]: New session 23 of user core.
Nov 1 01:08:50.020393 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 01:08:50.156382 sshd[7785]: pam_unix(sshd:session): session closed for user core
Nov 1 01:08:50.168149 systemd[1]: sshd@20-145.40.82.59:22-139.178.89.65:40484.service: Deactivated successfully.
Nov 1 01:08:50.169075 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 01:08:50.169793 systemd-logind[1816]: Session 23 logged out. Waiting for processes to exit.
Nov 1 01:08:50.170519 systemd[1]: Started sshd@21-145.40.82.59:22-139.178.89.65:40498.service - OpenSSH per-connection server daemon (139.178.89.65:40498).
Nov 1 01:08:50.171033 systemd-logind[1816]: Removed session 23.
Nov 1 01:08:50.197810 sshd[7813]: Accepted publickey for core from 139.178.89.65 port 40498 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:08:50.198648 sshd[7813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:08:50.201021 systemd-logind[1816]: New session 24 of user core.
Nov 1 01:08:50.216408 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 01:08:50.304566 sshd[7813]: pam_unix(sshd:session): session closed for user core
Nov 1 01:08:50.306005 systemd[1]: sshd@21-145.40.82.59:22-139.178.89.65:40498.service: Deactivated successfully.
Nov 1 01:08:50.306914 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 01:08:50.307573 systemd-logind[1816]: Session 24 logged out. Waiting for processes to exit.
Nov 1 01:08:50.308118 systemd-logind[1816]: Removed session 24.
Nov 1 01:08:50.778076 kubelet[3094]: E1101 01:08:50.777940 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-9vnfp" podUID="1c2067e6-df38-44d0-9df8-192be51b26fc"
Nov 1 01:08:54.776879 kubelet[3094]: E1101 01:08:54.776787 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-jkbn2" podUID="61ef21f2-b413-4ed0-8572-35cbc407e679"
Nov 1 01:08:55.331956 systemd[1]: Started sshd@22-145.40.82.59:22-139.178.89.65:40504.service - OpenSSH per-connection server daemon (139.178.89.65:40504).
Nov 1 01:08:55.387274 sshd[7841]: Accepted publickey for core from 139.178.89.65 port 40504 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:08:55.388813 sshd[7841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:08:55.393811 systemd-logind[1816]: New session 25 of user core.
Nov 1 01:08:55.404393 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 1 01:08:55.537020 sshd[7841]: pam_unix(sshd:session): session closed for user core
Nov 1 01:08:55.538574 systemd[1]: sshd@22-145.40.82.59:22-139.178.89.65:40504.service: Deactivated successfully.
Nov 1 01:08:55.539553 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 01:08:55.540229 systemd-logind[1816]: Session 25 logged out. Waiting for processes to exit.
Nov 1 01:08:55.540796 systemd-logind[1816]: Removed session 25.
Nov 1 01:08:56.771088 kubelet[3094]: E1101 01:08:56.771003 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4gtnc" podUID="f10f5e42-eb9a-47ab-8781-8e9dfee85efa"
Nov 1 01:08:57.770841 kubelet[3094]: E1101 01:08:57.770703 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59cbdf9dd7-589d7" podUID="66631db9-6f47-4e8c-8fde-e00b56c3ece6"
Nov 1 01:08:59.769514 kubelet[3094]: E1101 01:08:59.769490 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57bf865dbd-dvkh4" podUID="53044ffa-faec-4388-b8a9-277f38bf6718"
Nov 1 01:09:00.551871 systemd[1]: Started sshd@23-145.40.82.59:22-139.178.89.65:47986.service - OpenSSH per-connection server daemon (139.178.89.65:47986).
Nov 1 01:09:00.600209 sshd[7901]: Accepted publickey for core from 139.178.89.65 port 47986 ssh2: RSA SHA256:7Ytc58gyNID63TmFqpvnPOSHtyQqpisrsHUaCNkqIsk
Nov 1 01:09:00.602196 sshd[7901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:09:00.609311 systemd-logind[1816]: New session 26 of user core.
Nov 1 01:09:00.636477 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 1 01:09:00.726393 sshd[7901]: pam_unix(sshd:session): session closed for user core
Nov 1 01:09:00.728314 systemd[1]: sshd@23-145.40.82.59:22-139.178.89.65:47986.service: Deactivated successfully.
Nov 1 01:09:00.729157 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 01:09:00.729565 systemd-logind[1816]: Session 26 logged out. Waiting for processes to exit.
Nov 1 01:09:00.730069 systemd-logind[1816]: Removed session 26.
Nov 1 01:09:00.772519 kubelet[3094]: E1101 01:09:00.772430 3094 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65b64f6597-6gf7n" podUID="2848d47a-0e6d-4163-bcbd-cf745e94e4c6"