Nov 24 06:58:27.938764 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025
Nov 24 06:58:27.938844 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 06:58:27.938862 kernel: BIOS-provided physical RAM map:
Nov 24 06:58:27.938870 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 06:58:27.938876 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 06:58:27.938883 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 06:58:27.938891 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 24 06:58:27.938905 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 24 06:58:27.938912 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 06:58:27.938943 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 06:58:27.938950 kernel: NX (Execute Disable) protection: active
Nov 24 06:58:27.938960 kernel: APIC: Static calls initialized
Nov 24 06:58:27.938967 kernel: SMBIOS 2.8 present.
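The BIOS-e820 entries above can be parsed mechanically. A minimal sketch (the regex and variable names are mine, not part of any kernel tooling) that totals the usable ranges from these exact lines:

```python
import re

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

log = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
"""

# e820 ranges are inclusive, so each region spans end - start + 1 bytes.
usable = sum(
    int(end, 16) - int(start, 16) + 1
    for start, end, kind in E820_RE.findall(log)
    if kind == "usable"
)
print(usable // 1024, "KiB usable")  # just under the droplet's 2 GiB
```

The two usable regions are the classic low-memory window below 0x9fc00 and everything from 1 MiB up to the hypervisor's reserved tail at 0x7ffdb000.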
Nov 24 06:58:27.938975 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 24 06:58:27.938983 kernel: DMI: Memory slots populated: 1/1
Nov 24 06:58:27.938991 kernel: Hypervisor detected: KVM
Nov 24 06:58:27.939005 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 24 06:58:27.939013 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 06:58:27.939021 kernel: kvm-clock: using sched offset of 4745319640 cycles
Nov 24 06:58:27.939030 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 06:58:27.939038 kernel: tsc: Detected 2494.138 MHz processor
Nov 24 06:58:27.939046 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 06:58:27.939055 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 06:58:27.939063 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 24 06:58:27.939071 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 06:58:27.939079 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 06:58:27.939090 kernel: ACPI: Early table checksum verification disabled
Nov 24 06:58:27.939098 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 24 06:58:27.939106 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:58:27.939114 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:58:27.939122 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:58:27.939130 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 24 06:58:27.939137 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:58:27.939145 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:58:27.939157 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:58:27.939165 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:58:27.939172 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 24 06:58:27.939180 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 24 06:58:27.939188 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 24 06:58:27.939197 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 24 06:58:27.939209 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 24 06:58:27.939221 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 24 06:58:27.939229 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 24 06:58:27.939237 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 24 06:58:27.939246 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 24 06:58:27.939254 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 24 06:58:27.939262 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 24 06:58:27.939271 kernel: Zone ranges:
Nov 24 06:58:27.939282 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 06:58:27.939290 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 24 06:58:27.939299 kernel: Normal empty
Nov 24 06:58:27.939307 kernel: Device empty
Nov 24 06:58:27.939315 kernel: Movable zone start for each node
Nov 24 06:58:27.939323 kernel: Early memory node ranges
Nov 24 06:58:27.939332 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 06:58:27.939340 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 24 06:58:27.939348 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 24 06:58:27.939357 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 06:58:27.939369 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 06:58:27.939377 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 24 06:58:27.939385 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 06:58:27.939396 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 06:58:27.939404 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 06:58:27.939414 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 06:58:27.939423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 06:58:27.939431 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 06:58:27.939441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 06:58:27.939453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 06:58:27.939462 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 06:58:27.939470 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 24 06:58:27.939478 kernel: TSC deadline timer available
Nov 24 06:58:27.939486 kernel: CPU topo: Max. logical packages: 1
Nov 24 06:58:27.939495 kernel: CPU topo: Max. logical dies: 1
Nov 24 06:58:27.939503 kernel: CPU topo: Max. dies per package: 1
Nov 24 06:58:27.939511 kernel: CPU topo: Max. threads per core: 1
Nov 24 06:58:27.939519 kernel: CPU topo: Num. cores per package: 2
Nov 24 06:58:27.939531 kernel: CPU topo: Num. threads per package: 2
Nov 24 06:58:27.939539 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 24 06:58:27.939547 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 06:58:27.939555 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 24 06:58:27.939582 kernel: Booting paravirtualized kernel on KVM
Nov 24 06:58:27.939591 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 06:58:27.939600 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 24 06:58:27.939609 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 24 06:58:27.939617 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 24 06:58:27.939629 kernel: pcpu-alloc: [0] 0 1
Nov 24 06:58:27.939640 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 24 06:58:27.939655 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 06:58:27.939668 kernel: random: crng init done
Nov 24 06:58:27.939680 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 24 06:58:27.939689 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 24 06:58:27.939697 kernel: Fallback order for Node 0: 0
Nov 24 06:58:27.939705 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 24 06:58:27.939803 kernel: Policy zone: DMA32
Nov 24 06:58:27.939816 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 06:58:27.939824 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 24 06:58:27.939833 kernel: Kernel/User page tables isolation: enabled
Nov 24 06:58:27.939841 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 06:58:27.939850 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 06:58:27.939858 kernel: Dynamic Preempt: voluntary
Nov 24 06:58:27.939866 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 06:58:27.939876 kernel: rcu: RCU event tracing is enabled.
Nov 24 06:58:27.939885 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 24 06:58:27.939897 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 06:58:27.939906 kernel: Rude variant of Tasks RCU enabled.
Nov 24 06:58:27.939914 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 06:58:27.939922 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 06:58:27.939930 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 24 06:58:27.939939 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 06:58:27.939950 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 06:58:27.939959 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 24 06:58:27.939967 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 24 06:58:27.939979 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
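The "Kernel command line:" entry above repeats `rootflags=rw mount.usrflags=ro` ahead of `BOOT_IMAGE=` (presumably prepended by the bootloader); for simple key=value options the duplication is harmless because later occurrences win. A rough sketch of that parsing rule (the helper name is mine; note the kernel itself honors every `console=` entry, while this sketch keeps only the last):

```python
def parse_cmdline(cmdline: str):
    """Split a kernel command line into bare flags and key=value options.

    Repeated keys keep the last value, matching how most simple
    option parsers treat duplicates.
    """
    flags, options = [], {}
    for token in cmdline.split():
        if "=" in token:
            key, _, value = token.partition("=")
            options[key] = value  # last occurrence wins
        else:
            flags.append(token)
    return flags, options

flags, opts = parse_cmdline(
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
    "console=ttyS0,115200n8 console=tty0 "
    "flatcar.first_boot=detected flatcar.oem.id=digitalocean"
)
print(opts["root"], opts["flatcar.oem.id"])
```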
Nov 24 06:58:27.939988 kernel: Console: colour VGA+ 80x25
Nov 24 06:58:27.939996 kernel: printk: legacy console [tty0] enabled
Nov 24 06:58:27.940004 kernel: printk: legacy console [ttyS0] enabled
Nov 24 06:58:27.940012 kernel: ACPI: Core revision 20240827
Nov 24 06:58:27.940021 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 24 06:58:27.940040 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 06:58:27.940052 kernel: x2apic enabled
Nov 24 06:58:27.940061 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 06:58:27.940070 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 24 06:58:27.940079 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 24 06:58:27.940090 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Nov 24 06:58:27.940102 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 24 06:58:27.940112 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 24 06:58:27.940121 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 06:58:27.940130 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 06:58:27.940143 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 06:58:27.940152 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 24 06:58:27.940161 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 06:58:27.940170 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 06:58:27.940179 kernel: MDS: Mitigation: Clear CPU buffers
Nov 24 06:58:27.940188 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 24 06:58:27.940197 kernel: active return thunk: its_return_thunk
Nov 24 06:58:27.940206 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 24 06:58:27.940215 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 06:58:27.940227 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 06:58:27.940235 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 06:58:27.940244 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 06:58:27.940253 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 24 06:58:27.940262 kernel: Freeing SMP alternatives memory: 32K
Nov 24 06:58:27.940271 kernel: pid_max: default: 32768 minimum: 301
Nov 24 06:58:27.940280 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 06:58:27.940289 kernel: landlock: Up and running.
Nov 24 06:58:27.940298 kernel: SELinux: Initializing.
Nov 24 06:58:27.940311 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 24 06:58:27.940320 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 24 06:58:27.940329 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 24 06:58:27.940338 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 24 06:58:27.940347 kernel: signal: max sigframe size: 1776
Nov 24 06:58:27.940355 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 06:58:27.940364 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 06:58:27.940373 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 06:58:27.940382 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 24 06:58:27.940395 kernel: smp: Bringing up secondary CPUs ...
Nov 24 06:58:27.940406 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 06:58:27.940415 kernel: .... node #0, CPUs: #1
Nov 24 06:58:27.940424 kernel: smp: Brought up 1 node, 2 CPUs
Nov 24 06:58:27.940433 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Nov 24 06:58:27.940443 kernel: Memory: 1958716K/2096612K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 133332K reserved, 0K cma-reserved)
Nov 24 06:58:27.940452 kernel: devtmpfs: initialized
Nov 24 06:58:27.940461 kernel: x86/mm: Memory block size: 128MB
Nov 24 06:58:27.940470 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 06:58:27.940482 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 24 06:58:27.940491 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 06:58:27.940500 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 06:58:27.940509 kernel: audit: initializing netlink subsys (disabled)
Nov 24 06:58:27.940518 kernel: audit: type=2000 audit(1763967504.493:1): state=initialized audit_enabled=0 res=1
Nov 24 06:58:27.940527 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 06:58:27.940536 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 06:58:27.940544 kernel: cpuidle: using governor menu
Nov 24 06:58:27.940553 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 06:58:27.940565 kernel: dca service started, version 1.12.1
Nov 24 06:58:27.940574 kernel: PCI: Using configuration type 1 for base access
Nov 24 06:58:27.940583 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
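The memory accounting in the log is internally consistent: the "Total pages: 524153" figure from zonelist setup, multiplied by the x86-64 base page size of 4 KiB, is exactly the 2096612K total that the "Memory:" summary line reports. A quick arithmetic check:

```python
PAGE_KIB = 4                    # x86-64 base page size, 4 KiB
total_pages = 524153            # "Built 1 zonelists ... Total pages: 524153"
total_kib = 2096612             # "Memory: 1958716K/2096612K available"
available_kib = 1958716

assert total_pages * PAGE_KIB == total_kib

# The difference is what the kernel holds back at boot
# (kernel text/data, initmem, reservations).
reserved_kib = total_kib - available_kib
print(reserved_kib, "KiB held back at boot")
```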
Nov 24 06:58:27.940592 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 06:58:27.940601 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 06:58:27.940610 kernel: ACPI: Added _OSI(Module Device)
Nov 24 06:58:27.940619 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 06:58:27.940627 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 06:58:27.940636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 06:58:27.940648 kernel: ACPI: Interpreter enabled
Nov 24 06:58:27.940658 kernel: ACPI: PM: (supports S0 S5)
Nov 24 06:58:27.940666 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 06:58:27.940675 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 06:58:27.940684 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 06:58:27.940693 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 24 06:58:27.940702 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 06:58:27.943069 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 24 06:58:27.943206 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 24 06:58:27.943301 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 24 06:58:27.943315 kernel: acpiphp: Slot [3] registered
Nov 24 06:58:27.943324 kernel: acpiphp: Slot [4] registered
Nov 24 06:58:27.943333 kernel: acpiphp: Slot [5] registered
Nov 24 06:58:27.943343 kernel: acpiphp: Slot [6] registered
Nov 24 06:58:27.943352 kernel: acpiphp: Slot [7] registered
Nov 24 06:58:27.943361 kernel: acpiphp: Slot [8] registered
Nov 24 06:58:27.943370 kernel: acpiphp: Slot [9] registered
Nov 24 06:58:27.943384 kernel: acpiphp: Slot [10] registered
Nov 24 06:58:27.943393 kernel: acpiphp: Slot [11] registered
Nov 24 06:58:27.943406 kernel: acpiphp: Slot [12] registered
Nov 24 06:58:27.943415 kernel: acpiphp: Slot [13] registered
Nov 24 06:58:27.943424 kernel: acpiphp: Slot [14] registered
Nov 24 06:58:27.943432 kernel: acpiphp: Slot [15] registered
Nov 24 06:58:27.943441 kernel: acpiphp: Slot [16] registered
Nov 24 06:58:27.943450 kernel: acpiphp: Slot [17] registered
Nov 24 06:58:27.943459 kernel: acpiphp: Slot [18] registered
Nov 24 06:58:27.943471 kernel: acpiphp: Slot [19] registered
Nov 24 06:58:27.943480 kernel: acpiphp: Slot [20] registered
Nov 24 06:58:27.943489 kernel: acpiphp: Slot [21] registered
Nov 24 06:58:27.943498 kernel: acpiphp: Slot [22] registered
Nov 24 06:58:27.943507 kernel: acpiphp: Slot [23] registered
Nov 24 06:58:27.943516 kernel: acpiphp: Slot [24] registered
Nov 24 06:58:27.943525 kernel: acpiphp: Slot [25] registered
Nov 24 06:58:27.943533 kernel: acpiphp: Slot [26] registered
Nov 24 06:58:27.943544 kernel: acpiphp: Slot [27] registered
Nov 24 06:58:27.943553 kernel: acpiphp: Slot [28] registered
Nov 24 06:58:27.943566 kernel: acpiphp: Slot [29] registered
Nov 24 06:58:27.943574 kernel: acpiphp: Slot [30] registered
Nov 24 06:58:27.943584 kernel: acpiphp: Slot [31] registered
Nov 24 06:58:27.943592 kernel: PCI host bridge to bus 0000:00
Nov 24 06:58:27.943709 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 24 06:58:27.943811 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 24 06:58:27.943898 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 06:58:27.944015 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 24 06:58:27.944100 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 24 06:58:27.944182 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 06:58:27.944320 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 24 06:58:27.944436 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 24 06:58:27.944541 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 24 06:58:27.944643 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 24 06:58:27.946936 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 24 06:58:27.947115 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 24 06:58:27.947222 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 24 06:58:27.947318 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 24 06:58:27.947438 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 24 06:58:27.947577 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 24 06:58:27.947753 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 24 06:58:27.947896 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 24 06:58:27.948001 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 24 06:58:27.948144 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 24 06:58:27.948243 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 24 06:58:27.948335 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 24 06:58:27.948436 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 24 06:58:27.948528 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 24 06:58:27.948620 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 06:58:27.950850 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 06:58:27.951025 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 24 06:58:27.951132 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 24 06:58:27.951233 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 24 06:58:27.951361 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 06:58:27.951460 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 24 06:58:27.951560 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 24 06:58:27.951661 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 24 06:58:27.951820 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 24 06:58:27.951928 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 24 06:58:27.952029 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 24 06:58:27.952139 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 24 06:58:27.952277 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 06:58:27.952380 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 24 06:58:27.952479 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 24 06:58:27.952576 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 24 06:58:27.952695 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 06:58:27.953911 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 24 06:58:27.954043 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 24 06:58:27.954169 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 24 06:58:27.954301 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 06:58:27.954401 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 24 06:58:27.954497 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 24 06:58:27.954509 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 06:58:27.954525 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 06:58:27.954535 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 06:58:27.954544 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 06:58:27.954553 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 24 06:58:27.954563 kernel: iommu: Default domain type: Translated
Nov 24 06:58:27.954572 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 06:58:27.954581 kernel: PCI: Using ACPI for IRQ routing
Nov 24 06:58:27.954590 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 06:58:27.954600 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 06:58:27.954613 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 24 06:58:27.954745 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 24 06:58:27.954902 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 24 06:58:27.955076 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 06:58:27.955092 kernel: vgaarb: loaded
Nov 24 06:58:27.955102 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 24 06:58:27.955111 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 24 06:58:27.955121 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 06:58:27.955130 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 06:58:27.955148 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 06:58:27.955157 kernel: pnp: PnP ACPI init
Nov 24 06:58:27.955166 kernel: pnp: PnP ACPI: found 4 devices
Nov 24 06:58:27.955175 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 06:58:27.955185 kernel: NET: Registered PF_INET protocol family
Nov 24 06:58:27.955194 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 24 06:58:27.955204 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 24 06:58:27.955213 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 06:58:27.955222 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
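The `[1af4:*]` devices enumerated above are virtio devices: vendor 0x1af4 is Red Hat (Qumranet), and the device ID identifies the virtio device type. A small lookup sketch (the table and names are mine, covering only the IDs that appear in this log):

```python
import re

# Transitional virtio PCI device IDs (vendor 0x1af4), plus the modern
# 0x1050 GPU ID -- only the device types seen in this boot log.
VIRTIO_IDS = {
    0x1000: "virtio-net",
    0x1001: "virtio-blk",
    0x1002: "virtio-balloon",
    0x1004: "virtio-scsi",
    0x1050: "virtio-gpu",
}

entries = """\
pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
"""

decoded = {
    slot: VIRTIO_IDS.get(int(dev, 16), "unknown")
    for slot, dev in re.findall(r"pci (\S+): \[1af4:([0-9a-f]{4})\]", entries)
}
for slot, name in decoded.items():
    print(slot, name)
```

The decode matches the PCI class codes the kernel printed: 0x030000 (display) for the GPU, 0x020000 (network) for the two NICs, and 0x010000 (storage) for the SCSI and block devices.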
Nov 24 06:58:27.955235 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 24 06:58:27.955245 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 24 06:58:27.955254 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 24 06:58:27.955263 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 24 06:58:27.955272 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 06:58:27.955281 kernel: NET: Registered PF_XDP protocol family
Nov 24 06:58:27.955384 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 24 06:58:27.955473 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 24 06:58:27.955576 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 06:58:27.960326 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 24 06:58:27.960570 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 24 06:58:27.960687 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 24 06:58:27.960816 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 24 06:58:27.960832 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 24 06:58:27.960928 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 25715 usecs
Nov 24 06:58:27.960941 kernel: PCI: CLS 0 bytes, default 64
Nov 24 06:58:27.960950 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 24 06:58:27.960970 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Nov 24 06:58:27.960980 kernel: Initialise system trusted keyrings
Nov 24 06:58:27.960990 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 24 06:58:27.960999 kernel: Key type asymmetric registered
Nov 24 06:58:27.961008 kernel: Asymmetric key parser 'x509' registered
Nov 24 06:58:27.961018 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 06:58:27.961027 kernel: io scheduler mq-deadline registered
Nov 24 06:58:27.961036 kernel: io scheduler kyber registered
Nov 24 06:58:27.961049 kernel: io scheduler bfq registered
Nov 24 06:58:27.961058 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 06:58:27.961067 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 24 06:58:27.961076 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 24 06:58:27.961085 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 24 06:58:27.961095 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 06:58:27.961103 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 06:58:27.961113 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 06:58:27.961122 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 06:58:27.961135 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 06:58:27.961313 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 24 06:58:27.961338 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 24 06:58:27.961437 kernel: rtc_cmos 00:03: registered as rtc0
Nov 24 06:58:27.961526 kernel: rtc_cmos 00:03: setting system clock to 2025-11-24T06:58:27 UTC (1763967507)
Nov 24 06:58:27.961611 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 24 06:58:27.961623 kernel: intel_pstate: CPU model not supported
Nov 24 06:58:27.961633 kernel: NET: Registered PF_INET6 protocol family
Nov 24 06:58:27.961648 kernel: Segment Routing with IPv6
Nov 24 06:58:27.961657 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 06:58:27.961666 kernel: NET: Registered PF_PACKET protocol family
Nov 24 06:58:27.961675 kernel: Key type dns_resolver registered
Nov 24 06:58:27.961684 kernel: IPI shorthand broadcast: enabled
Nov 24 06:58:27.961693 kernel: sched_clock: Marking stable (3257003677, 157017642)->(3442929373, -28908054)
Nov 24 06:58:27.961702 kernel: registered taskstats version 1
Nov 24 06:58:27.961752 kernel: Loading compiled-in X.509 certificates
Nov 24 06:58:27.961762 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607'
Nov 24 06:58:27.961775 kernel: Demotion targets for Node 0: null
Nov 24 06:58:27.961784 kernel: Key type .fscrypt registered
Nov 24 06:58:27.961793 kernel: Key type fscrypt-provisioning registered
Nov 24 06:58:27.961828 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 06:58:27.961840 kernel: ima: Allocated hash algorithm: sha1
Nov 24 06:58:27.961850 kernel: ima: No architecture policies found
Nov 24 06:58:27.961859 kernel: clk: Disabling unused clocks
Nov 24 06:58:27.961868 kernel: Warning: unable to open an initial console.
Nov 24 06:58:27.961878 kernel: Freeing unused kernel image (initmem) memory: 46200K
Nov 24 06:58:27.961891 kernel: Write protecting the kernel read-only data: 40960k
Nov 24 06:58:27.961901 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 24 06:58:27.961911 kernel: Run /init as init process
Nov 24 06:58:27.961920 kernel: with arguments:
Nov 24 06:58:27.961931 kernel: /init
Nov 24 06:58:27.961940 kernel: with environment:
Nov 24 06:58:27.961949 kernel: HOME=/
Nov 24 06:58:27.961958 kernel: TERM=linux
Nov 24 06:58:27.961970 systemd[1]: Successfully made /usr/ read-only.
Nov 24 06:58:27.961987 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 06:58:27.961997 systemd[1]: Detected virtualization kvm.
Nov 24 06:58:27.962007 systemd[1]: Detected architecture x86-64.
Nov 24 06:58:27.962016 systemd[1]: Running in initrd.
Nov 24 06:58:27.962026 systemd[1]: No hostname configured, using default hostname.
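The rtc_cmos entry above records the same instant two ways: an ISO timestamp and the epoch seconds the kernel printed in parentheses, "setting system clock to 2025-11-24T06:58:27 UTC (1763967507)". The pairing can be verified directly (a quick check, not part of any kernel tooling):

```python
from datetime import datetime, timezone

rtc_iso = "2025-11-24T06:58:27"
rtc_epoch = 1763967507  # value the kernel printed in parentheses

# The log says UTC, so attach an explicit UTC offset before converting.
parsed = datetime.fromisoformat(rtc_iso).replace(tzinfo=timezone.utc)
assert int(parsed.timestamp()) == rtc_epoch
print(parsed.isoformat(), "->", rtc_epoch)
```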
Nov 24 06:58:27.962036 systemd[1]: Hostname set to .
Nov 24 06:58:27.962046 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 06:58:27.962060 systemd[1]: Queued start job for default target initrd.target.
Nov 24 06:58:27.962070 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 06:58:27.962080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 06:58:27.962091 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 24 06:58:27.962101 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 06:58:27.962111 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 24 06:58:27.962125 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 24 06:58:27.962136 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 24 06:58:27.962147 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 24 06:58:27.962156 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 06:58:27.962166 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 06:58:27.962176 systemd[1]: Reached target paths.target - Path Units.
Nov 24 06:58:27.962190 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 06:58:27.962200 systemd[1]: Reached target swap.target - Swaps.
Nov 24 06:58:27.962210 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 06:58:27.962220 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 06:58:27.962230 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 06:58:27.962240 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 24 06:58:27.962250 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 24 06:58:27.962260 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 06:58:27.962275 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 06:58:27.962292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 06:58:27.962302 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 06:58:27.962312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 06:58:27.962322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 06:58:27.962332 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 06:58:27.962342 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 06:58:27.962353 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 06:58:27.962363 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 06:58:27.962377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 06:58:27.962387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:58:27.962401 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 06:58:27.962469 systemd-journald[192]: Collecting audit messages is disabled. Nov 24 06:58:27.962510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 06:58:27.962526 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 06:58:27.962541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Nov 24 06:58:27.962558 systemd-journald[192]: Journal started Nov 24 06:58:27.962595 systemd-journald[192]: Runtime Journal (/run/log/journal/360f36af650f4e949f750c2caf4ea46e) is 4.9M, max 39.2M, 34.3M free. Nov 24 06:58:27.967750 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 06:58:27.968831 systemd-modules-load[194]: Inserted module 'overlay' Nov 24 06:58:27.982990 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 06:58:28.008744 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 06:58:28.011129 systemd-modules-load[194]: Inserted module 'br_netfilter' Nov 24 06:58:28.074061 kernel: Bridge firewalling registered Nov 24 06:58:28.014006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 06:58:28.022326 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 06:58:28.077011 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 06:58:28.078334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:58:28.079362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 24 06:58:28.083693 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 06:58:28.089930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 06:58:28.102099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 06:58:28.117607 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 06:58:28.123035 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 24 06:58:28.126765 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 06:58:28.130943 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 06:58:28.136927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 06:58:28.172209 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 06:58:28.192754 systemd-resolved[227]: Positive Trust Anchors: Nov 24 06:58:28.193570 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 06:58:28.193610 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 06:58:28.200109 systemd-resolved[227]: Defaulting to hostname 'linux'. Nov 24 06:58:28.202467 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 06:58:28.203745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 24 06:58:28.305811 kernel: SCSI subsystem initialized Nov 24 06:58:28.320772 kernel: Loading iSCSI transport class v2.0-870. Nov 24 06:58:28.334843 kernel: iscsi: registered transport (tcp) Nov 24 06:58:28.359803 kernel: iscsi: registered transport (qla4xxx) Nov 24 06:58:28.359920 kernel: QLogic iSCSI HBA Driver Nov 24 06:58:28.385789 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 06:58:28.404564 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 06:58:28.405579 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 06:58:28.466675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 06:58:28.469895 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 06:58:28.530767 kernel: raid6: avx2x4 gen() 17438 MB/s Nov 24 06:58:28.547764 kernel: raid6: avx2x2 gen() 17091 MB/s Nov 24 06:58:28.566031 kernel: raid6: avx2x1 gen() 12270 MB/s Nov 24 06:58:28.566136 kernel: raid6: using algorithm avx2x4 gen() 17438 MB/s Nov 24 06:58:28.584768 kernel: raid6: .... xor() 9556 MB/s, rmw enabled Nov 24 06:58:28.584868 kernel: raid6: using avx2x2 recovery algorithm Nov 24 06:58:28.607755 kernel: xor: automatically using best checksumming function avx Nov 24 06:58:28.793760 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 06:58:28.803686 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 06:58:28.806549 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 06:58:28.841118 systemd-udevd[440]: Using default interface naming scheme 'v255'. Nov 24 06:58:28.849785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 06:58:28.854238 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 24 06:58:28.886886 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Nov 24 06:58:28.926203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 24 06:58:28.928468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 06:58:29.000298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 06:58:29.004454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 06:58:29.074742 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues Nov 24 06:58:29.082747 kernel: scsi host0: Virtio SCSI HBA Nov 24 06:58:29.113020 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 06:58:29.130737 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 24 06:58:29.132745 kernel: libata version 3.00 loaded. Nov 24 06:58:29.142752 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 24 06:58:29.147890 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 24 06:58:29.163835 kernel: scsi host1: ata_piix Nov 24 06:58:29.168229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 06:58:29.171047 kernel: scsi host2: ata_piix Nov 24 06:58:29.171130 kernel: AES CTR mode by8 optimization enabled Nov 24 06:58:29.168558 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:58:29.186413 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 06:58:29.186448 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 24 06:58:29.186461 kernel: GPT:9289727 != 125829119 Nov 24 06:58:29.186472 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 06:58:29.186494 kernel: GPT:9289727 != 125829119 Nov 24 06:58:29.186505 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 24 06:58:29.186517 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:58:29.186528 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0 Nov 24 06:58:29.186540 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0 Nov 24 06:58:29.186663 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:58:29.191144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:58:29.194475 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 06:58:29.207815 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 24 06:58:29.210919 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 24 06:58:29.237663 kernel: ACPI: bus type USB registered Nov 24 06:58:29.237776 kernel: usbcore: registered new interface driver usbfs Nov 24 06:58:29.239745 kernel: usbcore: registered new interface driver hub Nov 24 06:58:29.242817 kernel: usbcore: registered new device driver usb Nov 24 06:58:29.302136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:58:29.410750 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 24 06:58:29.416738 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 24 06:58:29.427757 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 24 06:58:29.438781 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 24 06:58:29.440989 kernel: hub 1-0:1.0: USB hub found Nov 24 06:58:29.441282 kernel: hub 1-0:1.0: 2 ports detected Nov 24 06:58:29.442170 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 24 06:58:29.459353 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 24 06:58:29.460588 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Nov 24 06:58:29.475969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 06:58:29.488744 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 24 06:58:29.489566 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 24 06:58:29.491237 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 06:58:29.492114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 24 06:58:29.493237 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 06:58:29.495776 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 06:58:29.498965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 06:58:29.515851 disk-uuid[595]: Primary Header is updated. Nov 24 06:58:29.515851 disk-uuid[595]: Secondary Entries is updated. Nov 24 06:58:29.515851 disk-uuid[595]: Secondary Header is updated. Nov 24 06:58:29.525775 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 06:58:29.532765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:58:30.544048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:58:30.545224 disk-uuid[596]: The operation has completed successfully. Nov 24 06:58:30.590800 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 06:58:30.591779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 06:58:30.626346 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 06:58:30.654589 sh[614]: Success Nov 24 06:58:30.678485 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 24 06:58:30.678622 kernel: device-mapper: uevent: version 1.0.3 Nov 24 06:58:30.680734 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 06:58:30.691737 kernel: device-mapper: verity: sha256 using shash "sha256-avx2" Nov 24 06:58:30.751525 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 06:58:30.757860 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 06:58:30.776576 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 24 06:58:30.789762 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (626) Nov 24 06:58:30.789855 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 06:58:30.792015 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:58:30.802994 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 06:58:30.803127 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 06:58:30.806409 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 06:58:30.807172 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 06:58:30.807955 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 06:58:30.808957 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 06:58:30.812143 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 24 06:58:30.849747 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657) Nov 24 06:58:30.852757 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:58:30.854825 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:58:30.860405 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:58:30.860522 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:58:30.868777 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:58:30.871261 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 06:58:30.873426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 24 06:58:30.981177 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 06:58:30.984991 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 06:58:31.041456 systemd-networkd[796]: lo: Link UP Nov 24 06:58:31.041468 systemd-networkd[796]: lo: Gained carrier Nov 24 06:58:31.044131 systemd-networkd[796]: Enumeration completed Nov 24 06:58:31.044753 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 06:58:31.045330 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 24 06:58:31.045353 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 24 06:58:31.046627 systemd[1]: Reached target network.target - Network. Nov 24 06:58:31.047119 systemd-networkd[796]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:58:31.047126 systemd-networkd[796]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 24 06:58:31.047511 systemd-networkd[796]: eth0: Link UP Nov 24 06:58:31.047699 systemd-networkd[796]: eth1: Link UP Nov 24 06:58:31.049451 systemd-networkd[796]: eth0: Gained carrier Nov 24 06:58:31.049466 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 24 06:58:31.055364 systemd-networkd[796]: eth1: Gained carrier Nov 24 06:58:31.055384 systemd-networkd[796]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:58:31.065989 systemd-networkd[796]: eth0: DHCPv4 address 164.90.155.191/20, gateway 164.90.144.1 acquired from 169.254.169.253 Nov 24 06:58:31.078910 systemd-networkd[796]: eth1: DHCPv4 address 10.124.0.8/20 acquired from 169.254.169.253 Nov 24 06:58:31.095708 ignition[702]: Ignition 2.22.0 Nov 24 06:58:31.096508 ignition[702]: Stage: fetch-offline Nov 24 06:58:31.096579 ignition[702]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:58:31.096589 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 24 06:58:31.096742 ignition[702]: parsed url from cmdline: "" Nov 24 06:58:31.096748 ignition[702]: no config URL provided Nov 24 06:58:31.096780 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 06:58:31.096796 ignition[702]: no config at "/usr/lib/ignition/user.ign" Nov 24 06:58:31.100018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 06:58:31.096804 ignition[702]: failed to fetch config: resource requires networking Nov 24 06:58:31.097083 ignition[702]: Ignition finished successfully Nov 24 06:58:31.102957 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 24 06:58:31.165606 ignition[805]: Ignition 2.22.0 Nov 24 06:58:31.165623 ignition[805]: Stage: fetch Nov 24 06:58:31.166634 ignition[805]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:58:31.166649 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 24 06:58:31.166853 ignition[805]: parsed url from cmdline: "" Nov 24 06:58:31.166860 ignition[805]: no config URL provided Nov 24 06:58:31.166870 ignition[805]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 06:58:31.166880 ignition[805]: no config at "/usr/lib/ignition/user.ign" Nov 24 06:58:31.167448 ignition[805]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 24 06:58:31.204833 ignition[805]: GET result: OK Nov 24 06:58:31.205006 ignition[805]: parsing config with SHA512: 22e2815e1823b261bc026203c0333c43082402c14cad2dd046a66c753bce3cbfcea533275696e06516578494c2d1a0915cd6cf18426addd170d01bb3448f9bed Nov 24 06:58:31.210177 unknown[805]: fetched base config from "system" Nov 24 06:58:31.210196 unknown[805]: fetched base config from "system" Nov 24 06:58:31.210694 ignition[805]: fetch: fetch complete Nov 24 06:58:31.210205 unknown[805]: fetched user config from "digitalocean" Nov 24 06:58:31.210702 ignition[805]: fetch: fetch passed Nov 24 06:58:31.210808 ignition[805]: Ignition finished successfully Nov 24 06:58:31.214519 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 24 06:58:31.216892 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 06:58:31.261973 ignition[811]: Ignition 2.22.0 Nov 24 06:58:31.261991 ignition[811]: Stage: kargs Nov 24 06:58:31.262172 ignition[811]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:58:31.262183 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 24 06:58:31.263011 ignition[811]: kargs: kargs passed Nov 24 06:58:31.265770 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 24 06:58:31.263074 ignition[811]: Ignition finished successfully Nov 24 06:58:31.269972 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 24 06:58:31.304838 ignition[817]: Ignition 2.22.0 Nov 24 06:58:31.304852 ignition[817]: Stage: disks Nov 24 06:58:31.305016 ignition[817]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:58:31.305026 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 24 06:58:31.307671 ignition[817]: disks: disks passed Nov 24 06:58:31.307772 ignition[817]: Ignition finished successfully Nov 24 06:58:31.309272 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 06:58:31.310396 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 06:58:31.311245 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 06:58:31.312417 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 06:58:31.313523 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 06:58:31.314483 systemd[1]: Reached target basic.target - Basic System. Nov 24 06:58:31.316678 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 06:58:31.356430 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 24 06:58:31.363527 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 06:58:31.365495 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 06:58:31.493746 kernel: EXT4-fs (vda9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 06:58:31.495251 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 06:58:31.497024 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 06:58:31.500127 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 24 06:58:31.501931 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 06:58:31.504917 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Nov 24 06:58:31.508882 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 24 06:58:31.509458 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 06:58:31.509554 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 06:58:31.527351 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 06:58:31.535329 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (834) Nov 24 06:58:31.541924 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:58:31.542207 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 06:58:31.545786 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:58:31.553653 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:58:31.553758 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:58:31.557281 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 06:58:31.608947 coreos-metadata[836]: Nov 24 06:58:31.608 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 24 06:58:31.622762 coreos-metadata[836]: Nov 24 06:58:31.621 INFO Fetch successful Nov 24 06:58:31.628785 coreos-metadata[837]: Nov 24 06:58:31.628 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 24 06:58:31.632600 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Nov 24 06:58:31.633609 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. 
Nov 24 06:58:31.634690 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 06:58:31.641480 coreos-metadata[837]: Nov 24 06:58:31.641 INFO Fetch successful Nov 24 06:58:31.643104 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory Nov 24 06:58:31.649781 coreos-metadata[837]: Nov 24 06:58:31.649 INFO wrote hostname ci-4459.2.1-c-f92aac29d7 to /sysroot/etc/hostname Nov 24 06:58:31.651230 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 24 06:58:31.652542 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 06:58:31.658331 initrd-setup-root[888]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 06:58:31.785645 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 06:58:31.787572 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 06:58:31.788826 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 06:58:31.809733 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 06:58:31.813112 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:58:31.834377 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 24 06:58:31.859922 ignition[957]: INFO : Ignition 2.22.0 Nov 24 06:58:31.859922 ignition[957]: INFO : Stage: mount Nov 24 06:58:31.861317 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 06:58:31.861317 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 24 06:58:31.862805 ignition[957]: INFO : mount: mount passed Nov 24 06:58:31.862805 ignition[957]: INFO : Ignition finished successfully Nov 24 06:58:31.864810 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 06:58:31.866839 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 06:58:31.889775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 24 06:58:31.923770 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968) Nov 24 06:58:31.927742 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:58:31.929735 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:58:31.934227 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:58:31.934323 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:58:31.937045 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 06:58:31.987708 ignition[985]: INFO : Ignition 2.22.0 Nov 24 06:58:31.987708 ignition[985]: INFO : Stage: files Nov 24 06:58:31.989355 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 06:58:31.989355 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 24 06:58:31.990618 ignition[985]: DEBUG : files: compiled without relabeling support, skipping Nov 24 06:58:31.991443 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 06:58:31.991443 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 06:58:31.995263 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 06:58:31.996453 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 06:58:31.997499 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 06:58:31.996574 unknown[985]: wrote ssh authorized keys file for user: core Nov 24 06:58:31.999665 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 06:58:31.999665 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 
06:58:32.111660 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 24 06:58:32.133019 systemd-networkd[796]: eth1: Gained IPv6LL
Nov 24 06:58:32.274851 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 24 06:58:32.276136 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 24 06:58:32.287262 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 24 06:58:32.287262 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 24 06:58:32.287262 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 24 06:58:32.287262 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 24 06:58:32.287262 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 24 06:58:32.287262 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 24 06:58:32.324993 systemd-networkd[796]: eth0: Gained IPv6LL
Nov 24 06:58:32.753272 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 24 06:58:33.094113 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 24 06:58:33.094113 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 24 06:58:33.096448 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 24 06:58:33.097242 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 24 06:58:33.097242 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 24 06:58:33.097242 ignition[985]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 24 06:58:33.097242 ignition[985]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 24 06:58:33.097242 ignition[985]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 24 06:58:33.103515 ignition[985]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 24 06:58:33.103515 ignition[985]: INFO : files: files passed
Nov 24 06:58:33.103515 ignition[985]: INFO : Ignition finished successfully
Nov 24 06:58:33.100260 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 24 06:58:33.103235 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 24 06:58:33.106902 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 24 06:58:33.122309 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 24 06:58:33.123162 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 24 06:58:33.134319 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 24 06:58:33.134319 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 24 06:58:33.137113 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 24 06:58:33.139393 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 24 06:58:33.141477 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 24 06:58:33.143146 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 24 06:58:33.211366 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 06:58:33.211528 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 24 06:58:33.213048 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 24 06:58:33.213670 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 24 06:58:33.214866 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 24 06:58:33.216014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 24 06:58:33.244338 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 06:58:33.247242 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 24 06:58:33.274282 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 24 06:58:33.275101 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 06:58:33.276351 systemd[1]: Stopped target timers.target - Timer Units.
Nov 24 06:58:33.277373 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 06:58:33.277680 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 06:58:33.279103 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 24 06:58:33.280235 systemd[1]: Stopped target basic.target - Basic System.
Nov 24 06:58:33.281344 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 24 06:58:33.282354 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 06:58:33.283513 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 24 06:58:33.284510 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 06:58:33.285655 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 24 06:58:33.286774 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 06:58:33.287815 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 24 06:58:33.288636 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 24 06:58:33.289491 systemd[1]: Stopped target swap.target - Swaps.
Nov 24 06:58:33.290351 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 06:58:33.290552 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 06:58:33.291551 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 24 06:58:33.292217 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 06:58:33.293025 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 24 06:58:33.293203 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 06:58:33.294024 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 06:58:33.294229 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 24 06:58:33.295295 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 24 06:58:33.295467 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 24 06:58:33.296692 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 24 06:58:33.296890 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 24 06:58:33.297540 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 24 06:58:33.297657 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 24 06:58:33.300842 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 24 06:58:33.304018 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 24 06:58:33.304522 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 06:58:33.304757 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 06:58:33.306437 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 06:58:33.307850 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 06:58:33.314083 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 06:58:33.314242 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 24 06:58:33.341003 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 24 06:58:33.350301 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 24 06:58:33.350522 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 24 06:58:33.358055 ignition[1039]: INFO : Ignition 2.22.0
Nov 24 06:58:33.358055 ignition[1039]: INFO : Stage: umount
Nov 24 06:58:33.359665 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 06:58:33.359665 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 24 06:58:33.361949 ignition[1039]: INFO : umount: umount passed
Nov 24 06:58:33.361949 ignition[1039]: INFO : Ignition finished successfully
Nov 24 06:58:33.362905 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 24 06:58:33.363419 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 24 06:58:33.364806 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 24 06:58:33.364879 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 24 06:58:33.365417 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 24 06:58:33.365464 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 24 06:58:33.366192 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 24 06:58:33.366232 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 24 06:58:33.367217 systemd[1]: Stopped target network.target - Network.
Nov 24 06:58:33.368031 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 24 06:58:33.368108 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 06:58:33.368915 systemd[1]: Stopped target paths.target - Path Units.
Nov 24 06:58:33.369650 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 06:58:33.372861 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 06:58:33.373565 systemd[1]: Stopped target slices.target - Slice Units.
Nov 24 06:58:33.374611 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 24 06:58:33.375526 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 24 06:58:33.375603 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 06:58:33.376401 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 24 06:58:33.376453 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 06:58:33.377188 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 24 06:58:33.377271 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 24 06:58:33.378066 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 24 06:58:33.378119 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 24 06:58:33.378988 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 24 06:58:33.379066 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 24 06:58:33.380213 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 24 06:58:33.381012 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 24 06:58:33.388302 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 24 06:58:33.388458 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 24 06:58:33.393135 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 24 06:58:33.393480 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 06:58:33.393537 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 06:58:33.397482 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 06:58:33.398487 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 24 06:58:33.398621 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 24 06:58:33.400451 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 24 06:58:33.401387 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 24 06:58:33.402112 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 24 06:58:33.402169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 06:58:33.404329 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 24 06:58:33.405994 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 24 06:58:33.406072 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 06:58:33.407178 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 06:58:33.407257 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 24 06:58:33.410010 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 06:58:33.410083 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 24 06:58:33.410676 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 06:58:33.416491 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 06:58:33.431209 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 06:58:33.431433 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 06:58:33.434477 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 06:58:33.434616 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 24 06:58:33.435639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 06:58:33.435699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 06:58:33.439751 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 06:58:33.439843 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 06:58:33.440926 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 06:58:33.441001 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 24 06:58:33.442569 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 24 06:58:33.442667 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 06:58:33.447098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 24 06:58:33.449166 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 24 06:58:33.449307 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 06:58:33.453329 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 06:58:33.453438 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 06:58:33.455401 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 24 06:58:33.455494 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 24 06:58:33.456670 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 06:58:33.456775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 06:58:33.457957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 06:58:33.458040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 06:58:33.460757 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 24 06:58:33.460932 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 24 06:58:33.473831 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 06:58:33.474012 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 24 06:58:33.475405 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 24 06:58:33.477444 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 24 06:58:33.513924 systemd[1]: Switching root.
Nov 24 06:58:33.554393 systemd-journald[192]: Journal stopped
Nov 24 06:58:34.931655 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Nov 24 06:58:34.931775 kernel: SELinux: policy capability network_peer_controls=1
Nov 24 06:58:34.931842 kernel: SELinux: policy capability open_perms=1
Nov 24 06:58:34.931860 kernel: SELinux: policy capability extended_socket_class=1
Nov 24 06:58:34.931876 kernel: SELinux: policy capability always_check_network=0
Nov 24 06:58:34.931893 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 24 06:58:34.931909 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 24 06:58:34.931922 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 24 06:58:34.931934 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 24 06:58:34.931946 kernel: SELinux: policy capability userspace_initial_context=0
Nov 24 06:58:34.931959 kernel: audit: type=1403 audit(1763967513.761:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 06:58:34.931983 systemd[1]: Successfully loaded SELinux policy in 73.986ms.
Nov 24 06:58:34.932013 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.499ms.
Nov 24 06:58:34.932029 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 06:58:34.932044 systemd[1]: Detected virtualization kvm.
Nov 24 06:58:34.932056 systemd[1]: Detected architecture x86-64.
Nov 24 06:58:34.932069 systemd[1]: Detected first boot.
Nov 24 06:58:34.932082 systemd[1]: Hostname set to .
Nov 24 06:58:34.932115 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 06:58:34.932140 zram_generator::config[1082]: No configuration found.
Nov 24 06:58:34.932160 kernel: Guest personality initialized and is inactive
Nov 24 06:58:34.932172 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Nov 24 06:58:34.932184 kernel: Initialized host personality
Nov 24 06:58:34.932206 kernel: NET: Registered PF_VSOCK protocol family
Nov 24 06:58:34.932218 systemd[1]: Populated /etc with preset unit settings.
Nov 24 06:58:34.932234 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 24 06:58:34.932246 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 06:58:34.932262 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 24 06:58:34.932275 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 06:58:34.932289 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 24 06:58:34.932307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 24 06:58:34.932319 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 24 06:58:34.932332 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 24 06:58:34.932345 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 24 06:58:34.932359 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 24 06:58:34.932376 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 24 06:58:34.932392 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 24 06:58:34.932405 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 06:58:34.932418 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 06:58:34.932432 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 24 06:58:34.932444 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 24 06:58:34.932458 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 24 06:58:34.932475 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 06:58:34.932488 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 24 06:58:34.932508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 06:58:34.932522 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 06:58:34.932534 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 24 06:58:34.932548 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 24 06:58:34.932560 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 24 06:58:34.932573 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 24 06:58:34.932585 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 06:58:34.932601 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 06:58:34.932614 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 06:58:34.932626 systemd[1]: Reached target swap.target - Swaps.
Nov 24 06:58:34.932639 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 24 06:58:34.932652 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 24 06:58:34.932664 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 24 06:58:34.932677 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 06:58:34.932690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 06:58:34.932703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 06:58:34.933854 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 24 06:58:34.933897 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 24 06:58:34.933911 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 24 06:58:34.933925 systemd[1]: Mounting media.mount - External Media Directory...
Nov 24 06:58:34.933938 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:34.933951 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 24 06:58:34.933963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 24 06:58:34.933991 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 24 06:58:34.934006 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 06:58:34.934024 systemd[1]: Reached target machines.target - Containers.
Nov 24 06:58:34.934036 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 24 06:58:34.934049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 06:58:34.934061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 06:58:34.934074 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 24 06:58:34.934087 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 06:58:34.934100 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 06:58:34.934114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 06:58:34.934130 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 24 06:58:34.934143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 06:58:34.934156 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 24 06:58:34.934168 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 06:58:34.934181 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 24 06:58:34.934194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 24 06:58:34.934206 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 24 06:58:34.934219 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 06:58:34.934236 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 06:58:34.934249 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 06:58:34.934265 kernel: fuse: init (API version 7.41)
Nov 24 06:58:34.934285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 06:58:34.934298 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 24 06:58:34.934311 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 24 06:58:34.934328 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 06:58:34.934341 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 24 06:58:34.934354 systemd[1]: Stopped verity-setup.service.
Nov 24 06:58:34.934368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:34.934380 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 24 06:58:34.934396 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 24 06:58:34.934409 systemd[1]: Mounted media.mount - External Media Directory.
Nov 24 06:58:34.934422 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 24 06:58:34.934434 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 24 06:58:34.934447 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 24 06:58:34.934459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 06:58:34.934472 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 06:58:34.934484 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 24 06:58:34.934497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 06:58:34.934514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 06:58:34.934527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 06:58:34.934539 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 06:58:34.934551 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 06:58:34.934570 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 24 06:58:34.934582 kernel: ACPI: bus type drm_connector registered
Nov 24 06:58:34.934594 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 06:58:34.934607 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 06:58:34.934619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 06:58:34.934636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 06:58:34.934649 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 24 06:58:34.934661 kernel: loop: module loaded
Nov 24 06:58:34.934780 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 06:58:34.934796 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 24 06:58:34.934814 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 24 06:58:34.934826 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 24 06:58:34.934838 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 06:58:34.934907 systemd-journald[1155]: Collecting audit messages is disabled.
Nov 24 06:58:34.934946 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 24 06:58:34.934959 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 24 06:58:34.934972 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 06:58:34.934986 systemd-journald[1155]: Journal started
Nov 24 06:58:34.935013 systemd-journald[1155]: Runtime Journal (/run/log/journal/360f36af650f4e949f750c2caf4ea46e) is 4.9M, max 39.2M, 34.3M free.
Nov 24 06:58:34.528736 systemd[1]: Queued start job for default target multi-user.target.
Nov 24 06:58:34.539148 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 24 06:58:34.539776 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 06:58:34.937742 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 24 06:58:34.945888 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 06:58:34.945963 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 24 06:58:34.954856 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 06:58:34.964840 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 24 06:58:34.972348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 24 06:58:34.976902 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 06:58:34.980519 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 06:58:34.982244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 06:58:34.983522 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 24 06:58:34.985055 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 24 06:58:34.986140 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 24 06:58:35.031032 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 24 06:58:35.032439 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 06:58:35.034452 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 24 06:58:35.037442 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 24 06:58:35.044566 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 24 06:58:35.053606 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 24 06:58:35.083527 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 24 06:58:35.098662 kernel: loop0: detected capacity change from 0 to 110984
Nov 24 06:58:35.092928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 06:58:35.107556 systemd-journald[1155]: Time spent on flushing to /var/log/journal/360f36af650f4e949f750c2caf4ea46e is 38.850ms for 1015 entries.
Nov 24 06:58:35.107556 systemd-journald[1155]: System Journal (/var/log/journal/360f36af650f4e949f750c2caf4ea46e) is 8M, max 195.6M, 187.6M free.
Nov 24 06:58:35.156988 systemd-journald[1155]: Received client request to flush runtime journal.
Nov 24 06:58:35.157089 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 24 06:58:35.121939 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 06:58:35.149345 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 24 06:58:35.149361 systemd-tmpfiles[1190]: ACLs are not supported, ignoring.
Nov 24 06:58:35.161087 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 24 06:58:35.162283 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 24 06:58:35.162996 kernel: loop1: detected capacity change from 0 to 219144
Nov 24 06:58:35.167935 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 24 06:58:35.282044 kernel: loop2: detected capacity change from 0 to 8
Nov 24 06:58:35.327872 kernel: loop3: detected capacity change from 0 to 128560
Nov 24 06:58:35.433399 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 24 06:58:35.448081 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 06:58:35.504763 kernel: loop4: detected capacity change from 0 to 110984
Nov 24 06:58:35.539362 kernel: loop5: detected capacity change from 0 to 219144
Nov 24 06:58:35.542081 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 24 06:58:35.549425 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Nov 24 06:58:35.549455 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Nov 24 06:58:35.561226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 06:58:35.570763 kernel: loop6: detected capacity change from 0 to 8
Nov 24 06:58:35.575932 kernel: loop7: detected capacity change from 0 to 128560
Nov 24 06:58:35.589320 (sd-merge)[1233]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Nov 24 06:58:35.590058 (sd-merge)[1233]: Merged extensions into '/usr'.
Nov 24 06:58:35.604494 systemd[1]: Reload requested from client PID 1189 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 24 06:58:35.604522 systemd[1]: Reloading...
Nov 24 06:58:35.775755 zram_generator::config[1261]: No configuration found.
Nov 24 06:58:35.981781 ldconfig[1182]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 24 06:58:36.051138 systemd[1]: Reloading finished in 445 ms.
Nov 24 06:58:36.082990 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 24 06:58:36.088066 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 24 06:58:36.098946 systemd[1]: Starting ensure-sysext.service...
Nov 24 06:58:36.101045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 06:58:36.139410 systemd[1]: Reload requested from client PID 1305 ('systemctl') (unit ensure-sysext.service)...
Nov 24 06:58:36.139426 systemd[1]: Reloading...
Nov 24 06:58:36.160880 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 24 06:58:36.160914 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 24 06:58:36.161364 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 24 06:58:36.161687 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 24 06:58:36.165061 systemd-tmpfiles[1306]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 24 06:58:36.165727 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Nov 24 06:58:36.167895 systemd-tmpfiles[1306]: ACLs are not supported, ignoring.
Nov 24 06:58:36.176616 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 06:58:36.176632 systemd-tmpfiles[1306]: Skipping /boot
Nov 24 06:58:36.199881 systemd-tmpfiles[1306]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 06:58:36.199896 systemd-tmpfiles[1306]: Skipping /boot
Nov 24 06:58:36.266743 zram_generator::config[1333]: No configuration found.
Nov 24 06:58:36.550058 systemd[1]: Reloading finished in 410 ms.
Nov 24 06:58:36.572993 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 24 06:58:36.574204 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 06:58:36.589988 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 24 06:58:36.593444 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 24 06:58:36.598076 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 24 06:58:36.607018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 06:58:36.611159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 06:58:36.615226 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 24 06:58:36.623056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:36.623257 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 06:58:36.627094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 06:58:36.632070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 06:58:36.642139 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 06:58:36.642833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 06:58:36.642971 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 06:58:36.643073 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:36.651225 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 24 06:58:36.654359 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:36.655055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 06:58:36.655228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 06:58:36.655312 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 06:58:36.655401 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:36.660340 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:36.660588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 06:58:36.669667 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 06:58:36.671406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 06:58:36.671573 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 06:58:36.671753 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:36.680209 systemd[1]: Finished ensure-sysext.service.
Nov 24 06:58:36.682123 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 24 06:58:36.693504 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 24 06:58:36.702455 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 24 06:58:36.716537 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 24 06:58:36.732215 systemd-udevd[1382]: Using default interface naming scheme 'v255'.
Nov 24 06:58:36.732858 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 24 06:58:36.734014 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 24 06:58:36.746002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 06:58:36.749244 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 06:58:36.775358 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 06:58:36.776293 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 06:58:36.777500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 06:58:36.780351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 06:58:36.783447 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 06:58:36.791358 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 24 06:58:36.792916 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 06:58:36.793350 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 24 06:58:36.801121 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 06:58:36.801339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 06:58:36.802494 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 06:58:36.870043 augenrules[1445]: No rules
Nov 24 06:58:36.873949 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 24 06:58:36.874305 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 24 06:58:36.898267 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 24 06:58:36.985497 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 24 06:58:36.993308 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped.
Nov 24 06:58:36.996345 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Nov 24 06:58:36.998360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:36.998538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 06:58:37.001225 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 06:58:37.004285 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 06:58:37.012029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 06:58:37.031655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 06:58:37.032825 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 06:58:37.032868 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 24 06:58:37.032887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:58:37.078798 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 24 06:58:37.082404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 06:58:37.083012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 06:58:37.083944 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 06:58:37.086133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 06:58:37.086315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 06:58:37.089411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 06:58:37.091104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 06:58:37.107761 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Nov 24 06:58:37.117415 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 06:58:37.202813 kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 06:58:37.255754 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 24 06:58:37.256462 systemd[1]: Reached target time-set.target - System Time Set.
Nov 24 06:58:37.279425 systemd-resolved[1381]: Positive Trust Anchors:
Nov 24 06:58:37.280754 systemd-resolved[1381]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 06:58:37.280900 systemd-resolved[1381]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 06:58:37.290483 systemd-resolved[1381]: Using system hostname 'ci-4459.2.1-c-f92aac29d7'.
Nov 24 06:58:37.292114 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 06:58:37.293901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 06:58:37.294422 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 06:58:37.295013 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 24 06:58:37.295593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 24 06:58:37.297077 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 24 06:58:37.297696 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 24 06:58:37.297917 systemd-networkd[1416]: lo: Link UP
Nov 24 06:58:37.297924 systemd-networkd[1416]: lo: Gained carrier
Nov 24 06:58:37.298256 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 24 06:58:37.299339 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 24 06:58:37.299891 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 24 06:58:37.299926 systemd[1]: Reached target paths.target - Path Units.
Nov 24 06:58:37.300789 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 06:58:37.303124 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 24 06:58:37.303126 systemd-networkd[1416]: Enumeration completed
Nov 24 06:58:37.304650 systemd-networkd[1416]: eth0: Configuring with /run/systemd/network/10-fa:30:45:0f:b1:1e.network.
Nov 24 06:58:37.306366 systemd-networkd[1416]: eth0: Link UP
Nov 24 06:58:37.306411 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 24 06:58:37.306586 systemd-networkd[1416]: eth0: Gained carrier
Nov 24 06:58:37.314225 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Nov 24 06:58:37.315177 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 24 06:58:37.317448 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 24 06:58:37.318622 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 24 06:58:37.327529 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 24 06:58:37.326481 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 24 06:58:37.327106 systemd-networkd[1416]: eth1: Configuring with /run/systemd/network/10-4a:1c:66:57:99:ea.network.
Nov 24 06:58:37.327665 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 24 06:58:37.329105 systemd-networkd[1416]: eth1: Link UP
Nov 24 06:58:37.330007 systemd-networkd[1416]: eth1: Gained carrier
Nov 24 06:58:37.330113 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 06:58:37.330978 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Nov 24 06:58:37.331246 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 24 06:58:37.333405 systemd[1]: Reached target network.target - Network.
Nov 24 06:58:37.335043 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Nov 24 06:58:37.335122 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 06:58:37.335988 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection.
Nov 24 06:58:37.336143 systemd[1]: Reached target basic.target - Basic System.
Nov 24 06:58:37.337205 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 24 06:58:37.337235 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 24 06:58:37.339954 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 24 06:58:37.346242 kernel: ACPI: button: Power Button [PWRF]
Nov 24 06:58:37.343575 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 24 06:58:37.347245 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 24 06:58:37.350946 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 24 06:58:37.355650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 24 06:58:37.365021 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 24 06:58:37.366040 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 24 06:58:37.369676 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 24 06:58:37.376092 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 24 06:58:37.380063 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 24 06:58:37.388033 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 24 06:58:37.394373 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 24 06:58:37.396869 jq[1487]: false
Nov 24 06:58:37.408620 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 24 06:58:37.415364 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 24 06:58:37.419212 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 24 06:58:37.421800 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 24 06:58:37.422465 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 24 06:58:37.428997 systemd[1]: Starting update-engine.service - Update Engine...
Nov 24 06:58:37.435451 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 24 06:58:37.441749 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Refreshing passwd entry cache
Nov 24 06:58:37.441074 oslogin_cache_refresh[1489]: Refreshing passwd entry cache
Nov 24 06:58:37.448684 extend-filesystems[1488]: Found /dev/vda6
Nov 24 06:58:37.446790 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 24 06:58:37.447708 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 24 06:58:37.447959 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 24 06:58:37.454308 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 24 06:58:37.455095 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 24 06:58:37.462738 extend-filesystems[1488]: Found /dev/vda9
Nov 24 06:58:37.482287 extend-filesystems[1488]: Checking size of /dev/vda9
Nov 24 06:58:37.482925 coreos-metadata[1484]: Nov 24 06:58:37.477 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 24 06:58:37.474585 oslogin_cache_refresh[1489]: Failure getting users, quitting
Nov 24 06:58:37.471973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 24 06:58:37.483364 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Failure getting users, quitting
Nov 24 06:58:37.483364 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 06:58:37.483364 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Refreshing group entry cache
Nov 24 06:58:37.474625 oslogin_cache_refresh[1489]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 06:58:37.479814 oslogin_cache_refresh[1489]: Refreshing group entry cache
Nov 24 06:58:37.485645 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 24 06:58:37.496903 extend-filesystems[1488]: Resized partition /dev/vda9
Nov 24 06:58:37.507391 coreos-metadata[1484]: Nov 24 06:58:37.490 INFO Fetch successful
Nov 24 06:58:37.487177 systemd[1]: motdgen.service: Deactivated successfully.
Nov 24 06:58:37.498065 oslogin_cache_refresh[1489]: Failure getting groups, quitting
Nov 24 06:58:37.507549 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Failure getting groups, quitting
Nov 24 06:58:37.507549 google_oslogin_nss_cache[1489]: oslogin_cache_refresh[1489]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 06:58:37.507615 extend-filesystems[1526]: resize2fs 1.47.3 (8-Jul-2025)
Nov 24 06:58:37.511655 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Nov 24 06:58:37.511688 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 24 06:58:37.488820 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 24 06:58:37.498079 oslogin_cache_refresh[1489]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 06:58:37.500939 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 24 06:58:37.503990 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 24 06:58:37.557931 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 24 06:58:37.569630 tar[1511]: linux-amd64/LICENSE
Nov 24 06:58:37.570561 tar[1511]: linux-amd64/helm
Nov 24 06:58:37.572992 jq[1504]: true
Nov 24 06:58:37.579811 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 24 06:58:37.591285 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 06:58:37.601792 jq[1539]: true
Nov 24 06:58:37.612318 update_engine[1503]: I20251124 06:58:37.612207 1503 main.cc:92] Flatcar Update Engine starting
Nov 24 06:58:37.617876 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 24 06:58:37.632338 dbus-daemon[1485]: [system] SELinux support is enabled
Nov 24 06:58:37.632610 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 24 06:58:37.636726 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 24 06:58:37.637102 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 24 06:58:37.637870 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 24 06:58:37.637956 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Nov 24 06:58:37.637977 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 24 06:58:37.668569 systemd[1]: Started update-engine.service - Update Engine.
Nov 24 06:58:37.671961 update_engine[1503]: I20251124 06:58:37.669708 1503 update_check_scheduler.cc:74] Next update check in 10m27s
Nov 24 06:58:37.684754 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 24 06:58:37.699684 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 24 06:58:37.699684 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 24 06:58:37.699684 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 24 06:58:37.713601 extend-filesystems[1488]: Resized filesystem in /dev/vda9
Nov 24 06:58:37.758562 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 24 06:58:37.762043 bash[1560]: Updated "/home/core/.ssh/authorized_keys"
Nov 24 06:58:37.760542 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 24 06:58:37.760974 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 24 06:58:37.762843 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 24 06:58:37.764115 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 24 06:58:37.771372 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 24 06:58:37.777092 systemd[1]: Starting sshkeys.service...
Nov 24 06:58:37.825273 systemd-logind[1496]: New seat seat0.
Nov 24 06:58:37.828326 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 24 06:58:37.857159 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 24 06:58:37.863931 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 24 06:58:37.937312 coreos-metadata[1575]: Nov 24 06:58:37.935 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 24 06:58:37.960026 coreos-metadata[1575]: Nov 24 06:58:37.958 INFO Fetch successful
Nov 24 06:58:37.973822 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 24 06:58:37.985395 unknown[1575]: wrote ssh authorized keys file for user: core
Nov 24 06:58:38.042798 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 24 06:58:38.063533 update-ssh-keys[1587]: Updated "/home/core/.ssh/authorized_keys"
Nov 24 06:58:38.064926 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 24 06:58:38.168012 containerd[1532]: time="2025-11-24T06:58:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 24 06:58:38.173751 containerd[1532]: time="2025-11-24T06:58:38.172362268Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Nov 24 06:58:38.211153 kernel: Console: switching to colour dummy device 80x25
Nov 24 06:58:38.216183 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 24 06:58:38.216262 kernel: [drm] features: -context_init
Nov 24 06:58:38.217673 systemd[1]: Finished sshkeys.service.
Nov 24 06:58:38.228508 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 06:58:38.237942 kernel: [drm] number of scanouts: 1
Nov 24 06:58:38.238035 kernel: [drm] number of cap sets: 0
Nov 24 06:58:38.245751 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 24 06:58:38.256099 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 24 06:58:38.256172 kernel: Console: switching to colour frame buffer device 128x48
Nov 24 06:58:38.263742 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 24 06:58:38.264223 containerd[1532]: time="2025-11-24T06:58:38.264181754Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.554µs"
Nov 24 06:58:38.265220 containerd[1532]: time="2025-11-24T06:58:38.264317860Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 24 06:58:38.265220 containerd[1532]: time="2025-11-24T06:58:38.264356296Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 24 06:58:38.265220 containerd[1532]: time="2025-11-24T06:58:38.264570725Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 24 06:58:38.265220 containerd[1532]: time="2025-11-24T06:58:38.264599885Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 24 06:58:38.265220 containerd[1532]: time="2025-11-24T06:58:38.264639453Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 06:58:38.265220 containerd[1532]: time="2025-11-24T06:58:38.264727551Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 06:58:38.265220 containerd[1532]: time="2025-11-24T06:58:38.264742205Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285111304Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285150459Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285165010Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285178899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285366815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285764844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285865651Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285877833Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.285920438Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.286156064Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 24 06:58:38.286396 containerd[1532]: time="2025-11-24T06:58:38.286251175Z" level=info msg="metadata content store policy set" policy=shared
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293310452Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293396966Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293420086Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293442166Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293457357Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293475502Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293486997Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293498704Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293509948Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293521578Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293540405Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 24 06:58:38.293742 containerd[1532]: time="2025-11-24T06:58:38.293557012Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293784056Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293834693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293855649Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293873584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293899663Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293910805Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293922571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293932364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293944184Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293954976Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.293993613Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.294063401Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.294078401Z" level=info msg="Start snapshots syncer"
Nov 24 06:58:38.294134 containerd[1532]: time="2025-11-24T06:58:38.294117192Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 24 06:58:38.294933 containerd[1532]: time="2025-11-24T06:58:38.294558921Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 24 06:58:38.294933 containerd[1532]: time="2025-11-24T06:58:38.294657129Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 24 06:58:38.296289 containerd[1532]: time="2025-11-24T06:58:38.296153600Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 24 06:58:38.296502 containerd[1532]: time="2025-11-24T06:58:38.296451060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 24 06:58:38.296502 containerd[1532]: time="2025-11-24T06:58:38.296497288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 24 06:58:38.296567 containerd[1532]: time="2025-11-24T06:58:38.296548391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 24 06:58:38.296608 containerd[1532]: time="2025-11-24T06:58:38.296586459Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 24 06:58:38.296608 containerd[1532]: time="2025-11-24T06:58:38.296605194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 24 06:58:38.296674 containerd[1532]: time="2025-11-24T06:58:38.296621404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 24 06:58:38.296674 containerd[1532]: time="2025-11-24T06:58:38.296636272Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 24 06:58:38.296741 containerd[1532]: time="2025-11-24T06:58:38.296672515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 24 06:58:38.296741 containerd[1532]: time="2025-11-24T06:58:38.296687869Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 24 06:58:38.296741 containerd[1532]: time="2025-11-24T06:58:38.296702218Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.297870319Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.297982386Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.297997852Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298010915Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298019520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298039954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298061079Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298078685Z" level=info msg="runtime interface created" Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298084398Z" level=info msg="created NRI interface" Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298092508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298110316Z" level=info msg="Connect containerd service" Nov 24 06:58:38.300427 containerd[1532]: time="2025-11-24T06:58:38.298158125Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 24 06:58:38.321043 
containerd[1532]: time="2025-11-24T06:58:38.319207496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 06:58:38.401894 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 24 06:58:38.447317 systemd-logind[1496]: Watching system buttons on /dev/input/event2 (Power Button) Nov 24 06:58:38.456404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:58:38.553176 locksmithd[1550]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 24 06:58:38.614030 containerd[1532]: time="2025-11-24T06:58:38.613970613Z" level=info msg="Start subscribing containerd event" Nov 24 06:58:38.614246 containerd[1532]: time="2025-11-24T06:58:38.614224738Z" level=info msg="Start recovering state" Nov 24 06:58:38.614433 containerd[1532]: time="2025-11-24T06:58:38.614415091Z" level=info msg="Start event monitor" Nov 24 06:58:38.614514 containerd[1532]: time="2025-11-24T06:58:38.614500604Z" level=info msg="Start cni network conf syncer for default" Nov 24 06:58:38.614579 containerd[1532]: time="2025-11-24T06:58:38.614566631Z" level=info msg="Start streaming server" Nov 24 06:58:38.614704 containerd[1532]: time="2025-11-24T06:58:38.614687109Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 24 06:58:38.614834 containerd[1532]: time="2025-11-24T06:58:38.614818526Z" level=info msg="runtime interface starting up..." Nov 24 06:58:38.614893 containerd[1532]: time="2025-11-24T06:58:38.614881490Z" level=info msg="starting plugins..." 
Nov 24 06:58:38.614967 containerd[1532]: time="2025-11-24T06:58:38.614954798Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 06:58:38.616356 containerd[1532]: time="2025-11-24T06:58:38.616311675Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 24 06:58:38.617921 containerd[1532]: time="2025-11-24T06:58:38.617864729Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 24 06:58:38.619918 containerd[1532]: time="2025-11-24T06:58:38.618678527Z" level=info msg="containerd successfully booted in 0.454224s" Nov 24 06:58:38.618819 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 06:58:38.798579 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 24 06:58:38.809442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 06:58:38.809995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:58:38.813757 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:58:38.818050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:58:38.821752 kernel: EDAC MC: Ver: 3.0.0 Nov 24 06:58:38.822410 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 06:58:38.900521 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 06:58:38.901084 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:58:38.908181 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 06:58:38.914180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:58:38.918805 systemd-networkd[1416]: eth1: Gained IPv6LL Nov 24 06:58:38.919409 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. 
Nov 24 06:58:38.927970 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 06:58:38.930498 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 06:58:38.936286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:58:38.940201 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 06:58:38.948804 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 24 06:58:38.958801 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 24 06:58:38.995348 systemd[1]: issuegen.service: Deactivated successfully. Nov 24 06:58:38.998255 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 24 06:58:39.002225 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 24 06:58:39.027820 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:58:39.032332 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 06:58:39.036447 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 24 06:58:39.042753 tar[1511]: linux-amd64/README.md Nov 24 06:58:39.043117 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 24 06:58:39.044898 systemd-networkd[1416]: eth0: Gained IPv6LL Nov 24 06:58:39.046890 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Nov 24 06:58:39.049116 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 24 06:58:39.049895 systemd[1]: Reached target getty.target - Login Prompts. Nov 24 06:58:39.094865 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 06:58:40.143909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:58:40.146285 systemd[1]: Reached target multi-user.target - Multi-User System. 
Nov 24 06:58:40.151246 systemd[1]: Startup finished in 3.345s (kernel) + 6.088s (initrd) + 6.461s (userspace) = 15.895s. Nov 24 06:58:40.154225 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 06:58:40.763768 kubelet[1668]: E1124 06:58:40.763670 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 06:58:40.766884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 06:58:40.767125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 06:58:40.767747 systemd[1]: kubelet.service: Consumed 1.270s CPU time, 255.5M memory peak. Nov 24 06:58:41.579487 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 06:58:41.581043 systemd[1]: Started sshd@0-164.90.155.191:22-139.178.68.195:51122.service - OpenSSH per-connection server daemon (139.178.68.195:51122). Nov 24 06:58:41.698873 sshd[1680]: Accepted publickey for core from 139.178.68.195 port 51122 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 06:58:41.701755 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:58:41.710863 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 06:58:41.712464 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 06:58:41.726822 systemd-logind[1496]: New session 1 of user core. Nov 24 06:58:41.743686 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 06:58:41.747981 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 24 06:58:41.765103 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 06:58:41.768923 systemd-logind[1496]: New session c1 of user core. Nov 24 06:58:41.925956 systemd[1685]: Queued start job for default target default.target. Nov 24 06:58:41.937208 systemd[1685]: Created slice app.slice - User Application Slice. Nov 24 06:58:41.937254 systemd[1685]: Reached target paths.target - Paths. Nov 24 06:58:41.937434 systemd[1685]: Reached target timers.target - Timers. Nov 24 06:58:41.939361 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 06:58:41.955484 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 06:58:41.955818 systemd[1685]: Reached target sockets.target - Sockets. Nov 24 06:58:41.955975 systemd[1685]: Reached target basic.target - Basic System. Nov 24 06:58:41.956099 systemd[1685]: Reached target default.target - Main User Target. Nov 24 06:58:41.956121 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 06:58:41.956236 systemd[1685]: Startup finished in 178ms. Nov 24 06:58:41.964013 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 06:58:42.039045 systemd[1]: Started sshd@1-164.90.155.191:22-139.178.68.195:51124.service - OpenSSH per-connection server daemon (139.178.68.195:51124). Nov 24 06:58:42.109076 sshd[1696]: Accepted publickey for core from 139.178.68.195 port 51124 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 06:58:42.110904 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:58:42.118799 systemd-logind[1496]: New session 2 of user core. Nov 24 06:58:42.129069 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 24 06:58:42.193822 sshd[1699]: Connection closed by 139.178.68.195 port 51124 Nov 24 06:58:42.193611 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Nov 24 06:58:42.205433 systemd[1]: sshd@1-164.90.155.191:22-139.178.68.195:51124.service: Deactivated successfully. Nov 24 06:58:42.207914 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 06:58:42.209886 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Nov 24 06:58:42.214169 systemd[1]: Started sshd@2-164.90.155.191:22-139.178.68.195:51136.service - OpenSSH per-connection server daemon (139.178.68.195:51136). Nov 24 06:58:42.216346 systemd-logind[1496]: Removed session 2. Nov 24 06:58:42.276890 sshd[1705]: Accepted publickey for core from 139.178.68.195 port 51136 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 06:58:42.278357 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:58:42.283946 systemd-logind[1496]: New session 3 of user core. Nov 24 06:58:42.292041 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 06:58:42.350837 sshd[1708]: Connection closed by 139.178.68.195 port 51136 Nov 24 06:58:42.351565 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Nov 24 06:58:42.366610 systemd[1]: sshd@2-164.90.155.191:22-139.178.68.195:51136.service: Deactivated successfully. Nov 24 06:58:42.372238 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 06:58:42.373826 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. Nov 24 06:58:42.377335 systemd[1]: Started sshd@3-164.90.155.191:22-139.178.68.195:51138.service - OpenSSH per-connection server daemon (139.178.68.195:51138). Nov 24 06:58:42.378148 systemd-logind[1496]: Removed session 3. 
Nov 24 06:58:42.438174 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 51138 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 06:58:42.439822 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:58:42.445927 systemd-logind[1496]: New session 4 of user core. Nov 24 06:58:42.456150 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 06:58:42.521849 sshd[1717]: Connection closed by 139.178.68.195 port 51138 Nov 24 06:58:42.521246 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Nov 24 06:58:42.536830 systemd[1]: sshd@3-164.90.155.191:22-139.178.68.195:51138.service: Deactivated successfully. Nov 24 06:58:42.539046 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 06:58:42.540189 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Nov 24 06:58:42.544277 systemd[1]: Started sshd@4-164.90.155.191:22-139.178.68.195:51148.service - OpenSSH per-connection server daemon (139.178.68.195:51148). Nov 24 06:58:42.545302 systemd-logind[1496]: Removed session 4. Nov 24 06:58:42.612238 sshd[1723]: Accepted publickey for core from 139.178.68.195 port 51148 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 06:58:42.614224 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:58:42.620808 systemd-logind[1496]: New session 5 of user core. Nov 24 06:58:42.630062 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 24 06:58:42.714150 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 06:58:42.714512 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:58:42.740495 sudo[1727]: pam_unix(sudo:session): session closed for user root Nov 24 06:58:42.746744 sshd[1726]: Connection closed by 139.178.68.195 port 51148 Nov 24 06:58:42.745997 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Nov 24 06:58:42.759516 systemd[1]: sshd@4-164.90.155.191:22-139.178.68.195:51148.service: Deactivated successfully. Nov 24 06:58:42.762476 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 06:58:42.763764 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Nov 24 06:58:42.768284 systemd[1]: Started sshd@5-164.90.155.191:22-139.178.68.195:51150.service - OpenSSH per-connection server daemon (139.178.68.195:51150). Nov 24 06:58:42.769300 systemd-logind[1496]: Removed session 5. Nov 24 06:58:42.835619 sshd[1733]: Accepted publickey for core from 139.178.68.195 port 51150 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 06:58:42.837145 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:58:42.842706 systemd-logind[1496]: New session 6 of user core. Nov 24 06:58:42.857019 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 24 06:58:42.919232 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 06:58:42.919973 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:58:42.926397 sudo[1738]: pam_unix(sudo:session): session closed for user root Nov 24 06:58:42.933865 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 06:58:42.934365 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:58:42.949409 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 06:58:42.998387 augenrules[1760]: No rules Nov 24 06:58:43.000301 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 06:58:43.000580 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 06:58:43.002022 sudo[1737]: pam_unix(sudo:session): session closed for user root Nov 24 06:58:43.005878 sshd[1736]: Connection closed by 139.178.68.195 port 51150 Nov 24 06:58:43.006692 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Nov 24 06:58:43.019330 systemd[1]: sshd@5-164.90.155.191:22-139.178.68.195:51150.service: Deactivated successfully. Nov 24 06:58:43.021918 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 06:58:43.023829 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Nov 24 06:58:43.027061 systemd[1]: Started sshd@6-164.90.155.191:22-139.178.68.195:51152.service - OpenSSH per-connection server daemon (139.178.68.195:51152). Nov 24 06:58:43.029650 systemd-logind[1496]: Removed session 6. 
Nov 24 06:58:43.098781 sshd[1769]: Accepted publickey for core from 139.178.68.195 port 51152 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 06:58:43.100657 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:58:43.107799 systemd-logind[1496]: New session 7 of user core. Nov 24 06:58:43.117135 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 06:58:43.178606 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 06:58:43.179597 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:58:43.678252 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 06:58:43.709749 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 06:58:44.108236 dockerd[1790]: time="2025-11-24T06:58:44.107592806Z" level=info msg="Starting up" Nov 24 06:58:44.113591 dockerd[1790]: time="2025-11-24T06:58:44.113525705Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 06:58:44.135567 dockerd[1790]: time="2025-11-24T06:58:44.135501775Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 06:58:44.226576 systemd[1]: var-lib-docker-metacopy\x2dcheck329172056-merged.mount: Deactivated successfully. Nov 24 06:58:44.242019 dockerd[1790]: time="2025-11-24T06:58:44.241946560Z" level=info msg="Loading containers: start." Nov 24 06:58:44.254797 kernel: Initializing XFRM netlink socket Nov 24 06:58:44.559749 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Nov 24 06:58:44.572668 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. 
Nov 24 06:58:44.623752 systemd-networkd[1416]: docker0: Link UP Nov 24 06:58:44.624046 systemd-timesyncd[1397]: Network configuration changed, trying to establish connection. Nov 24 06:58:44.629193 dockerd[1790]: time="2025-11-24T06:58:44.628987943Z" level=info msg="Loading containers: done." Nov 24 06:58:44.649683 dockerd[1790]: time="2025-11-24T06:58:44.649117431Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 06:58:44.649683 dockerd[1790]: time="2025-11-24T06:58:44.649238794Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 06:58:44.649683 dockerd[1790]: time="2025-11-24T06:58:44.649402153Z" level=info msg="Initializing buildkit" Nov 24 06:58:44.655608 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2437418728-merged.mount: Deactivated successfully. Nov 24 06:58:44.678678 dockerd[1790]: time="2025-11-24T06:58:44.678351088Z" level=info msg="Completed buildkit initialization" Nov 24 06:58:44.687759 dockerd[1790]: time="2025-11-24T06:58:44.687676523Z" level=info msg="Daemon has completed initialization" Nov 24 06:58:44.688074 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 06:58:44.688894 dockerd[1790]: time="2025-11-24T06:58:44.688806200Z" level=info msg="API listen on /run/docker.sock" Nov 24 06:58:45.382335 containerd[1532]: time="2025-11-24T06:58:45.382221166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\"" Nov 24 06:58:46.174627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409709976.mount: Deactivated successfully. 
Nov 24 06:58:47.287048 containerd[1532]: time="2025-11-24T06:58:47.286978132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:47.290759 containerd[1532]: time="2025-11-24T06:58:47.289496865Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.2: active requests=0, bytes read=27063531" Nov 24 06:58:47.290759 containerd[1532]: time="2025-11-24T06:58:47.289796200Z" level=info msg="ImageCreate event name:\"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:47.296448 containerd[1532]: time="2025-11-24T06:58:47.296392276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:47.296941 containerd[1532]: time="2025-11-24T06:58:47.296903513Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.2\" with image id \"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\", size \"27060130\" in 1.914632205s" Nov 24 06:58:47.297029 containerd[1532]: time="2025-11-24T06:58:47.296948483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\" returns image reference \"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85\"" Nov 24 06:58:47.297614 containerd[1532]: time="2025-11-24T06:58:47.297574819Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\"" Nov 24 06:58:48.667228 containerd[1532]: time="2025-11-24T06:58:48.667160929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:48.668627 containerd[1532]: time="2025-11-24T06:58:48.668221859Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.2: active requests=0, bytes read=21161621" Nov 24 06:58:48.669320 containerd[1532]: time="2025-11-24T06:58:48.669277889Z" level=info msg="ImageCreate event name:\"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:48.672273 containerd[1532]: time="2025-11-24T06:58:48.672217665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:48.673774 containerd[1532]: time="2025-11-24T06:58:48.673729027Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.2\" with image id \"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\", size \"22818657\" in 1.376102317s" Nov 24 06:58:48.673774 containerd[1532]: time="2025-11-24T06:58:48.673768468Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\" returns image reference \"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8\"" Nov 24 06:58:48.674688 containerd[1532]: time="2025-11-24T06:58:48.674653829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\"" Nov 24 06:58:49.696771 containerd[1532]: time="2025-11-24T06:58:49.696512695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:49.697741 containerd[1532]: time="2025-11-24T06:58:49.697667864Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.2: active requests=0, bytes read=15725218" Nov 24 06:58:49.699744 containerd[1532]: time="2025-11-24T06:58:49.698400807Z" level=info msg="ImageCreate event name:\"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:49.701502 containerd[1532]: time="2025-11-24T06:58:49.701446540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:49.702833 containerd[1532]: time="2025-11-24T06:58:49.702790294Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.2\" with image id \"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\", size \"17382272\" in 1.027964196s" Nov 24 06:58:49.703003 containerd[1532]: time="2025-11-24T06:58:49.702981175Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\" returns image reference \"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952\"" Nov 24 06:58:49.703517 containerd[1532]: time="2025-11-24T06:58:49.703477196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\"" Nov 24 06:58:50.865149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 06:58:50.869021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:58:51.018187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776314532.mount: Deactivated successfully. Nov 24 06:58:51.097932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 06:58:51.111697 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 06:58:51.187579 kubelet[2090]: E1124 06:58:51.187033 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 06:58:51.193347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 06:58:51.193686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 06:58:51.194389 systemd[1]: kubelet.service: Consumed 220ms CPU time, 110.3M memory peak. Nov 24 06:58:51.483025 containerd[1532]: time="2025-11-24T06:58:51.482892104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:51.484343 containerd[1532]: time="2025-11-24T06:58:51.484285908Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.2: active requests=0, bytes read=25964463" Nov 24 06:58:51.484895 containerd[1532]: time="2025-11-24T06:58:51.484854772Z" level=info msg="ImageCreate event name:\"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:51.487355 containerd[1532]: time="2025-11-24T06:58:51.487285874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:51.488660 containerd[1532]: time="2025-11-24T06:58:51.488603436Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.2\" with image id 
\"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45\", repo tag \"registry.k8s.io/kube-proxy:v1.34.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\", size \"25963482\" in 1.785084845s" Nov 24 06:58:51.488660 containerd[1532]: time="2025-11-24T06:58:51.488649058Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\" returns image reference \"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45\"" Nov 24 06:58:51.489574 containerd[1532]: time="2025-11-24T06:58:51.489478097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 24 06:58:51.538345 systemd-resolved[1381]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 24 06:58:52.072517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3431293289.mount: Deactivated successfully. Nov 24 06:58:53.052354 containerd[1532]: time="2025-11-24T06:58:53.052256164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:53.053816 containerd[1532]: time="2025-11-24T06:58:53.053765447Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 24 06:58:53.054303 containerd[1532]: time="2025-11-24T06:58:53.054233567Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:53.056755 containerd[1532]: time="2025-11-24T06:58:53.056535119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:53.057753 containerd[1532]: time="2025-11-24T06:58:53.057619605Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.568116178s" Nov 24 06:58:53.057753 containerd[1532]: time="2025-11-24T06:58:53.057653394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 24 06:58:53.058482 containerd[1532]: time="2025-11-24T06:58:53.058181039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 24 06:58:53.534446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194228745.mount: Deactivated successfully. Nov 24 06:58:53.541268 containerd[1532]: time="2025-11-24T06:58:53.540098286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:53.541268 containerd[1532]: time="2025-11-24T06:58:53.541007943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 24 06:58:53.541268 containerd[1532]: time="2025-11-24T06:58:53.541192642Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:53.543816 containerd[1532]: time="2025-11-24T06:58:53.543773459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:53.545203 containerd[1532]: time="2025-11-24T06:58:53.544753199Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id 
\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 486.530072ms" Nov 24 06:58:53.545203 containerd[1532]: time="2025-11-24T06:58:53.544799432Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 24 06:58:53.545387 containerd[1532]: time="2025-11-24T06:58:53.545354002Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 24 06:58:54.199664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693165141.mount: Deactivated successfully. Nov 24 06:58:54.597159 systemd-resolved[1381]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 24 06:58:56.414935 containerd[1532]: time="2025-11-24T06:58:56.414802860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:56.417029 containerd[1532]: time="2025-11-24T06:58:56.416953279Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Nov 24 06:58:56.417432 containerd[1532]: time="2025-11-24T06:58:56.417354274Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:56.420407 containerd[1532]: time="2025-11-24T06:58:56.420330608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:58:56.422779 containerd[1532]: time="2025-11-24T06:58:56.421838235Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id 
\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 2.876438209s" Nov 24 06:58:56.422779 containerd[1532]: time="2025-11-24T06:58:56.421903541Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 24 06:59:00.321800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:59:00.322178 systemd[1]: kubelet.service: Consumed 220ms CPU time, 110.3M memory peak. Nov 24 06:59:00.325336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:59:00.372022 systemd[1]: Reload requested from client PID 2234 ('systemctl') (unit session-7.scope)... Nov 24 06:59:00.372050 systemd[1]: Reloading... Nov 24 06:59:00.535884 zram_generator::config[2277]: No configuration found. Nov 24 06:59:00.821671 systemd[1]: Reloading finished in 448 ms. Nov 24 06:59:00.884561 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 06:59:00.884682 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 06:59:00.885006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:59:00.885066 systemd[1]: kubelet.service: Consumed 146ms CPU time, 98.2M memory peak. Nov 24 06:59:00.887359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:59:01.076072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:59:01.088395 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 06:59:01.198109 kubelet[2331]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 24 06:59:01.198109 kubelet[2331]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 06:59:01.204808 kubelet[2331]: I1124 06:59:01.204597 2331 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 06:59:02.097221 kubelet[2331]: I1124 06:59:02.097155 2331 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 24 06:59:02.097221 kubelet[2331]: I1124 06:59:02.097207 2331 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 06:59:02.097521 kubelet[2331]: I1124 06:59:02.097246 2331 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 24 06:59:02.097521 kubelet[2331]: I1124 06:59:02.097256 2331 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 24 06:59:02.097678 kubelet[2331]: I1124 06:59:02.097605 2331 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 06:59:02.110347 kubelet[2331]: I1124 06:59:02.110262 2331 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 06:59:02.111478 kubelet[2331]: E1124 06:59:02.110695 2331 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://164.90.155.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 06:59:02.120845 kubelet[2331]: I1124 06:59:02.120807 2331 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 06:59:02.127953 kubelet[2331]: I1124 06:59:02.127902 2331 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 24 06:59:02.131899 kubelet[2331]: I1124 06:59:02.131790 2331 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 06:59:02.133734 kubelet[2331]: I1124 06:59:02.131876 2331 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-c-f92aac29d7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 06:59:02.133734 kubelet[2331]: I1124 06:59:02.133695 2331 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 
06:59:02.133734 kubelet[2331]: I1124 06:59:02.133742 2331 container_manager_linux.go:306] "Creating device plugin manager" Nov 24 06:59:02.134141 kubelet[2331]: I1124 06:59:02.133963 2331 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 24 06:59:02.136569 kubelet[2331]: I1124 06:59:02.136515 2331 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:59:02.136879 kubelet[2331]: I1124 06:59:02.136848 2331 kubelet.go:475] "Attempting to sync node with API server" Nov 24 06:59:02.136879 kubelet[2331]: I1124 06:59:02.136876 2331 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 06:59:02.137025 kubelet[2331]: I1124 06:59:02.136908 2331 kubelet.go:387] "Adding apiserver pod source" Nov 24 06:59:02.137025 kubelet[2331]: I1124 06:59:02.136929 2331 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 06:59:02.142799 kubelet[2331]: E1124 06:59:02.141863 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.90.155.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 06:59:02.142799 kubelet[2331]: E1124 06:59:02.142076 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.90.155.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-c-f92aac29d7&limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 06:59:02.143299 kubelet[2331]: I1124 06:59:02.143272 2331 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 06:59:02.144516 kubelet[2331]: I1124 06:59:02.144470 2331 
kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 06:59:02.144626 kubelet[2331]: I1124 06:59:02.144527 2331 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 24 06:59:02.144626 kubelet[2331]: W1124 06:59:02.144615 2331 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 06:59:02.150634 kubelet[2331]: I1124 06:59:02.150598 2331 server.go:1262] "Started kubelet" Nov 24 06:59:02.154346 kubelet[2331]: I1124 06:59:02.154301 2331 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 06:59:02.160147 kubelet[2331]: I1124 06:59:02.159545 2331 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 06:59:02.160147 kubelet[2331]: I1124 06:59:02.159681 2331 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 24 06:59:02.160147 kubelet[2331]: I1124 06:59:02.159928 2331 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 06:59:02.160147 kubelet[2331]: I1124 06:59:02.160041 2331 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 06:59:02.164781 kubelet[2331]: I1124 06:59:02.164735 2331 server.go:310] "Adding debug handlers to kubelet server" Nov 24 06:59:02.172101 kubelet[2331]: E1124 06:59:02.166941 2331 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.90.155.191:6443/api/v1/namespaces/default/events\": dial tcp 164.90.155.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.1-c-f92aac29d7.187adf223933d3d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-c-f92aac29d7,UID:ci-4459.2.1-c-f92aac29d7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-c-f92aac29d7,},FirstTimestamp:2025-11-24 06:59:02.150550487 +0000 UTC m=+1.056371359,LastTimestamp:2025-11-24 06:59:02.150550487 +0000 UTC m=+1.056371359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-c-f92aac29d7,}" Nov 24 06:59:02.172823 kubelet[2331]: I1124 06:59:02.172796 2331 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 24 06:59:02.173295 kubelet[2331]: E1124 06:59:02.173267 2331 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" Nov 24 06:59:02.174064 kubelet[2331]: I1124 06:59:02.173481 2331 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 06:59:02.176999 kubelet[2331]: E1124 06:59:02.176943 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.155.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-c-f92aac29d7?timeout=10s\": dial tcp 164.90.155.191:6443: connect: connection refused" interval="200ms" Nov 24 06:59:02.177674 kubelet[2331]: I1124 06:59:02.177641 2331 factory.go:223] Registration of the systemd container factory successfully Nov 24 06:59:02.178004 kubelet[2331]: I1124 06:59:02.177794 2331 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 06:59:02.178918 kubelet[2331]: I1124 06:59:02.178896 2331 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 
06:59:02.179146 kubelet[2331]: I1124 06:59:02.179131 2331 reconciler.go:29] "Reconciler: start to sync state" Nov 24 06:59:02.180782 kubelet[2331]: I1124 06:59:02.180738 2331 factory.go:223] Registration of the containerd container factory successfully Nov 24 06:59:02.196072 kubelet[2331]: I1124 06:59:02.195654 2331 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 24 06:59:02.197233 kubelet[2331]: I1124 06:59:02.197144 2331 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 24 06:59:02.197233 kubelet[2331]: I1124 06:59:02.197175 2331 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 24 06:59:02.197233 kubelet[2331]: I1124 06:59:02.197205 2331 kubelet.go:2427] "Starting kubelet main sync loop" Nov 24 06:59:02.197443 kubelet[2331]: E1124 06:59:02.197266 2331 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 06:59:02.209408 kubelet[2331]: E1124 06:59:02.209311 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.90.155.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 06:59:02.210171 kubelet[2331]: E1124 06:59:02.209569 2331 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 06:59:02.210171 kubelet[2331]: E1124 06:59:02.209988 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.90.155.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 06:59:02.217362 kubelet[2331]: I1124 06:59:02.217265 2331 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 06:59:02.217362 kubelet[2331]: I1124 06:59:02.217295 2331 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 06:59:02.217362 kubelet[2331]: I1124 06:59:02.217327 2331 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:59:02.219484 kubelet[2331]: I1124 06:59:02.219450 2331 policy_none.go:49] "None policy: Start" Nov 24 06:59:02.219484 kubelet[2331]: I1124 06:59:02.219479 2331 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 24 06:59:02.219484 kubelet[2331]: I1124 06:59:02.219495 2331 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 24 06:59:02.220869 kubelet[2331]: I1124 06:59:02.220841 2331 policy_none.go:47] "Start" Nov 24 06:59:02.227194 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 06:59:02.245392 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 06:59:02.252019 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 24 06:59:02.263149 kubelet[2331]: E1124 06:59:02.263106 2331 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 06:59:02.263654 kubelet[2331]: I1124 06:59:02.263626 2331 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 06:59:02.263839 kubelet[2331]: I1124 06:59:02.263790 2331 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 06:59:02.264423 kubelet[2331]: I1124 06:59:02.264397 2331 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 06:59:02.266977 kubelet[2331]: E1124 06:59:02.266943 2331 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 06:59:02.267209 kubelet[2331]: E1124 06:59:02.267185 2331 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.1-c-f92aac29d7\" not found" Nov 24 06:59:02.312045 systemd[1]: Created slice kubepods-burstable-podac89e3954f83fc413c4f45f1f27840d4.slice - libcontainer container kubepods-burstable-podac89e3954f83fc413c4f45f1f27840d4.slice. Nov 24 06:59:02.329909 kubelet[2331]: E1124 06:59:02.329862 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.335134 systemd[1]: Created slice kubepods-burstable-podb292ce6ce977935b6a4e8368ad478e2e.slice - libcontainer container kubepods-burstable-podb292ce6ce977935b6a4e8368ad478e2e.slice. 
Nov 24 06:59:02.339581 kubelet[2331]: E1124 06:59:02.339527 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.343515 systemd[1]: Created slice kubepods-burstable-podf922c021a54d41be346662dfc48bb952.slice - libcontainer container kubepods-burstable-podf922c021a54d41be346662dfc48bb952.slice. Nov 24 06:59:02.345445 kubelet[2331]: E1124 06:59:02.345414 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.366160 kubelet[2331]: I1124 06:59:02.366060 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.366915 kubelet[2331]: E1124 06:59:02.366879 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.155.191:6443/api/v1/nodes\": dial tcp 164.90.155.191:6443: connect: connection refused" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.377817 kubelet[2331]: E1124 06:59:02.377706 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.155.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-c-f92aac29d7?timeout=10s\": dial tcp 164.90.155.191:6443: connect: connection refused" interval="400ms" Nov 24 06:59:02.380283 kubelet[2331]: I1124 06:59:02.380206 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380283 kubelet[2331]: I1124 06:59:02.380251 2331 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f922c021a54d41be346662dfc48bb952-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-c-f92aac29d7\" (UID: \"f922c021a54d41be346662dfc48bb952\") " pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380283 kubelet[2331]: I1124 06:59:02.380272 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac89e3954f83fc413c4f45f1f27840d4-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-c-f92aac29d7\" (UID: \"ac89e3954f83fc413c4f45f1f27840d4\") " pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380283 kubelet[2331]: I1124 06:59:02.380286 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac89e3954f83fc413c4f45f1f27840d4-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-c-f92aac29d7\" (UID: \"ac89e3954f83fc413c4f45f1f27840d4\") " pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380283 kubelet[2331]: I1124 06:59:02.380302 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac89e3954f83fc413c4f45f1f27840d4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-c-f92aac29d7\" (UID: \"ac89e3954f83fc413c4f45f1f27840d4\") " pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380634 kubelet[2331]: I1124 06:59:02.380316 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380634 kubelet[2331]: I1124 06:59:02.380332 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380634 kubelet[2331]: I1124 06:59:02.380346 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.380634 kubelet[2331]: I1124 06:59:02.380368 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.568957 kubelet[2331]: I1124 06:59:02.568864 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.569548 kubelet[2331]: E1124 06:59:02.569458 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.155.191:6443/api/v1/nodes\": dial tcp 164.90.155.191:6443: connect: connection refused" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.633113 kubelet[2331]: E1124 06:59:02.632924 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:02.634697 containerd[1532]: time="2025-11-24T06:59:02.634611735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-c-f92aac29d7,Uid:ac89e3954f83fc413c4f45f1f27840d4,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:02.637655 systemd-resolved[1381]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. Nov 24 06:59:02.642264 kubelet[2331]: E1124 06:59:02.642185 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:02.649107 containerd[1532]: time="2025-11-24T06:59:02.648708162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-c-f92aac29d7,Uid:b292ce6ce977935b6a4e8368ad478e2e,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:02.655127 kubelet[2331]: E1124 06:59:02.655058 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:02.656734 containerd[1532]: time="2025-11-24T06:59:02.656619134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-c-f92aac29d7,Uid:f922c021a54d41be346662dfc48bb952,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:02.779319 kubelet[2331]: E1124 06:59:02.779262 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.155.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-c-f92aac29d7?timeout=10s\": dial tcp 164.90.155.191:6443: connect: connection refused" interval="800ms" Nov 24 06:59:02.972014 kubelet[2331]: I1124 06:59:02.971706 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:02.972492 kubelet[2331]: 
E1124 06:59:02.972348 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.155.191:6443/api/v1/nodes\": dial tcp 164.90.155.191:6443: connect: connection refused" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:03.104919 kubelet[2331]: E1124 06:59:03.104764 2331 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.90.155.191:6443/api/v1/namespaces/default/events\": dial tcp 164.90.155.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.1-c-f92aac29d7.187adf223933d3d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-c-f92aac29d7,UID:ci-4459.2.1-c-f92aac29d7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-c-f92aac29d7,},FirstTimestamp:2025-11-24 06:59:02.150550487 +0000 UTC m=+1.056371359,LastTimestamp:2025-11-24 06:59:02.150550487 +0000 UTC m=+1.056371359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-c-f92aac29d7,}" Nov 24 06:59:03.259201 kubelet[2331]: E1124 06:59:03.259049 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://164.90.155.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-c-f92aac29d7&limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 06:59:03.303481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322592234.mount: Deactivated successfully. 
Nov 24 06:59:03.309320 containerd[1532]: time="2025-11-24T06:59:03.309248327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:59:03.311509 containerd[1532]: time="2025-11-24T06:59:03.311442704Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 24 06:59:03.312484 containerd[1532]: time="2025-11-24T06:59:03.312250239Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:59:03.313840 containerd[1532]: time="2025-11-24T06:59:03.313783647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 06:59:03.314207 containerd[1532]: time="2025-11-24T06:59:03.314172220Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:59:03.315619 containerd[1532]: time="2025-11-24T06:59:03.315570704Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 24 06:59:03.316829 containerd[1532]: time="2025-11-24T06:59:03.316197479Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:59:03.317956 containerd[1532]: time="2025-11-24T06:59:03.317896631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 
06:59:03.320478 containerd[1532]: time="2025-11-24T06:59:03.319370588Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 683.236848ms" Nov 24 06:59:03.320478 containerd[1532]: time="2025-11-24T06:59:03.320447465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 662.538353ms" Nov 24 06:59:03.321763 containerd[1532]: time="2025-11-24T06:59:03.321662938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 666.253934ms" Nov 24 06:59:03.359459 kubelet[2331]: E1124 06:59:03.359405 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://164.90.155.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 06:59:03.381079 kubelet[2331]: E1124 06:59:03.381027 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://164.90.155.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 06:59:03.417182 containerd[1532]: time="2025-11-24T06:59:03.417121968Z" level=info msg="connecting to shim 43954dc9c0dae79dbd199d2483264f147e6edc352a2868ccc596c4d0050aa871" address="unix:///run/containerd/s/ed45a3800c247e9a6ee0959490c55862256ae43cdb68f9261494daf42ab04e6f" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:03.417439 kubelet[2331]: E1124 06:59:03.417358 2331 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://164.90.155.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.155.191:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 06:59:03.418971 containerd[1532]: time="2025-11-24T06:59:03.418928147Z" level=info msg="connecting to shim aff8bfc8f55530855152723f4f2aabb4e719e12716a783f2d6a3715fdd2d2af1" address="unix:///run/containerd/s/d7b02dc2816ed0e00fa2920263a576dfbb4fd57b42a274b7b12834f153f478d0" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:03.427697 containerd[1532]: time="2025-11-24T06:59:03.427629622Z" level=info msg="connecting to shim f70d883b843db6466d1789bead2f122906dc0eee5fcfbde3ccc3beb0338e8462" address="unix:///run/containerd/s/a328a4aaefe386197afa76e5f7c394a3818a9488c008afebc30d4f602b601b45" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:03.539147 systemd[1]: Started cri-containerd-43954dc9c0dae79dbd199d2483264f147e6edc352a2868ccc596c4d0050aa871.scope - libcontainer container 43954dc9c0dae79dbd199d2483264f147e6edc352a2868ccc596c4d0050aa871. Nov 24 06:59:03.553298 systemd[1]: Started cri-containerd-aff8bfc8f55530855152723f4f2aabb4e719e12716a783f2d6a3715fdd2d2af1.scope - libcontainer container aff8bfc8f55530855152723f4f2aabb4e719e12716a783f2d6a3715fdd2d2af1. 
Nov 24 06:59:03.557152 systemd[1]: Started cri-containerd-f70d883b843db6466d1789bead2f122906dc0eee5fcfbde3ccc3beb0338e8462.scope - libcontainer container f70d883b843db6466d1789bead2f122906dc0eee5fcfbde3ccc3beb0338e8462. Nov 24 06:59:03.580608 kubelet[2331]: E1124 06:59:03.580550 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.155.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-c-f92aac29d7?timeout=10s\": dial tcp 164.90.155.191:6443: connect: connection refused" interval="1.6s" Nov 24 06:59:03.668081 containerd[1532]: time="2025-11-24T06:59:03.668010834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-c-f92aac29d7,Uid:ac89e3954f83fc413c4f45f1f27840d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"43954dc9c0dae79dbd199d2483264f147e6edc352a2868ccc596c4d0050aa871\"" Nov 24 06:59:03.672155 kubelet[2331]: E1124 06:59:03.672106 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:03.684993 containerd[1532]: time="2025-11-24T06:59:03.684797548Z" level=info msg="CreateContainer within sandbox \"43954dc9c0dae79dbd199d2483264f147e6edc352a2868ccc596c4d0050aa871\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 06:59:03.697167 containerd[1532]: time="2025-11-24T06:59:03.697072743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-c-f92aac29d7,Uid:b292ce6ce977935b6a4e8368ad478e2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"aff8bfc8f55530855152723f4f2aabb4e719e12716a783f2d6a3715fdd2d2af1\"" Nov 24 06:59:03.698303 kubelet[2331]: E1124 06:59:03.698263 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Nov 24 06:59:03.705951 containerd[1532]: time="2025-11-24T06:59:03.705904871Z" level=info msg="Container 1eebeb2083667947c91c153d5cec5c07848da7f54b413071d78ef0083e065ace: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:03.712454 containerd[1532]: time="2025-11-24T06:59:03.711624365Z" level=info msg="CreateContainer within sandbox \"aff8bfc8f55530855152723f4f2aabb4e719e12716a783f2d6a3715fdd2d2af1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 06:59:03.715025 containerd[1532]: time="2025-11-24T06:59:03.714850908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-c-f92aac29d7,Uid:f922c021a54d41be346662dfc48bb952,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70d883b843db6466d1789bead2f122906dc0eee5fcfbde3ccc3beb0338e8462\"" Nov 24 06:59:03.718928 containerd[1532]: time="2025-11-24T06:59:03.718862453Z" level=info msg="Container 7ac0c55108bcea537992e2ed97309c143283c8d57ec706cf30c2267b176187af: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:03.728894 kubelet[2331]: E1124 06:59:03.728851 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:03.734275 containerd[1532]: time="2025-11-24T06:59:03.734195998Z" level=info msg="CreateContainer within sandbox \"f70d883b843db6466d1789bead2f122906dc0eee5fcfbde3ccc3beb0338e8462\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 06:59:03.735422 containerd[1532]: time="2025-11-24T06:59:03.735233253Z" level=info msg="CreateContainer within sandbox \"43954dc9c0dae79dbd199d2483264f147e6edc352a2868ccc596c4d0050aa871\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1eebeb2083667947c91c153d5cec5c07848da7f54b413071d78ef0083e065ace\"" Nov 24 06:59:03.736894 containerd[1532]: time="2025-11-24T06:59:03.736850678Z" level=info 
msg="StartContainer for \"1eebeb2083667947c91c153d5cec5c07848da7f54b413071d78ef0083e065ace\"" Nov 24 06:59:03.739818 containerd[1532]: time="2025-11-24T06:59:03.739618707Z" level=info msg="CreateContainer within sandbox \"aff8bfc8f55530855152723f4f2aabb4e719e12716a783f2d6a3715fdd2d2af1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7ac0c55108bcea537992e2ed97309c143283c8d57ec706cf30c2267b176187af\"" Nov 24 06:59:03.740310 containerd[1532]: time="2025-11-24T06:59:03.740278444Z" level=info msg="StartContainer for \"7ac0c55108bcea537992e2ed97309c143283c8d57ec706cf30c2267b176187af\"" Nov 24 06:59:03.741516 containerd[1532]: time="2025-11-24T06:59:03.741422948Z" level=info msg="connecting to shim 1eebeb2083667947c91c153d5cec5c07848da7f54b413071d78ef0083e065ace" address="unix:///run/containerd/s/ed45a3800c247e9a6ee0959490c55862256ae43cdb68f9261494daf42ab04e6f" protocol=ttrpc version=3 Nov 24 06:59:03.746751 containerd[1532]: time="2025-11-24T06:59:03.745965162Z" level=info msg="connecting to shim 7ac0c55108bcea537992e2ed97309c143283c8d57ec706cf30c2267b176187af" address="unix:///run/containerd/s/d7b02dc2816ed0e00fa2920263a576dfbb4fd57b42a274b7b12834f153f478d0" protocol=ttrpc version=3 Nov 24 06:59:03.748643 containerd[1532]: time="2025-11-24T06:59:03.748590661Z" level=info msg="Container 37640cce5f7a6b7b2280334489c62ef43a4eb17bc9a845fa349e7e80f10a3582: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:03.772266 containerd[1532]: time="2025-11-24T06:59:03.771798025Z" level=info msg="CreateContainer within sandbox \"f70d883b843db6466d1789bead2f122906dc0eee5fcfbde3ccc3beb0338e8462\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37640cce5f7a6b7b2280334489c62ef43a4eb17bc9a845fa349e7e80f10a3582\"" Nov 24 06:59:03.773701 containerd[1532]: time="2025-11-24T06:59:03.773652167Z" level=info msg="StartContainer for \"37640cce5f7a6b7b2280334489c62ef43a4eb17bc9a845fa349e7e80f10a3582\"" Nov 24 
06:59:03.775871 containerd[1532]: time="2025-11-24T06:59:03.775453563Z" level=info msg="connecting to shim 37640cce5f7a6b7b2280334489c62ef43a4eb17bc9a845fa349e7e80f10a3582" address="unix:///run/containerd/s/a328a4aaefe386197afa76e5f7c394a3818a9488c008afebc30d4f602b601b45" protocol=ttrpc version=3 Nov 24 06:59:03.776013 kubelet[2331]: I1124 06:59:03.775645 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:03.776945 kubelet[2331]: E1124 06:59:03.776902 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://164.90.155.191:6443/api/v1/nodes\": dial tcp 164.90.155.191:6443: connect: connection refused" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:03.791958 systemd[1]: Started cri-containerd-1eebeb2083667947c91c153d5cec5c07848da7f54b413071d78ef0083e065ace.scope - libcontainer container 1eebeb2083667947c91c153d5cec5c07848da7f54b413071d78ef0083e065ace. Nov 24 06:59:03.793379 systemd[1]: Started cri-containerd-7ac0c55108bcea537992e2ed97309c143283c8d57ec706cf30c2267b176187af.scope - libcontainer container 7ac0c55108bcea537992e2ed97309c143283c8d57ec706cf30c2267b176187af. Nov 24 06:59:03.814447 systemd[1]: Started cri-containerd-37640cce5f7a6b7b2280334489c62ef43a4eb17bc9a845fa349e7e80f10a3582.scope - libcontainer container 37640cce5f7a6b7b2280334489c62ef43a4eb17bc9a845fa349e7e80f10a3582. 
Nov 24 06:59:03.930783 containerd[1532]: time="2025-11-24T06:59:03.930530612Z" level=info msg="StartContainer for \"7ac0c55108bcea537992e2ed97309c143283c8d57ec706cf30c2267b176187af\" returns successfully" Nov 24 06:59:03.936369 containerd[1532]: time="2025-11-24T06:59:03.936309298Z" level=info msg="StartContainer for \"1eebeb2083667947c91c153d5cec5c07848da7f54b413071d78ef0083e065ace\" returns successfully" Nov 24 06:59:03.994106 containerd[1532]: time="2025-11-24T06:59:03.993993042Z" level=info msg="StartContainer for \"37640cce5f7a6b7b2280334489c62ef43a4eb17bc9a845fa349e7e80f10a3582\" returns successfully" Nov 24 06:59:04.225514 kubelet[2331]: E1124 06:59:04.225468 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:04.225758 kubelet[2331]: E1124 06:59:04.225734 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:04.233012 kubelet[2331]: E1124 06:59:04.232951 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:04.233226 kubelet[2331]: E1124 06:59:04.233170 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:04.235196 kubelet[2331]: E1124 06:59:04.235156 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:04.235464 kubelet[2331]: E1124 06:59:04.235293 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:05.238567 kubelet[2331]: E1124 06:59:05.238521 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:05.239030 kubelet[2331]: E1124 06:59:05.238806 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:05.241524 kubelet[2331]: E1124 06:59:05.241477 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:05.241765 kubelet[2331]: E1124 06:59:05.241732 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:05.379162 kubelet[2331]: I1124 06:59:05.379124 2331 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.240525 kubelet[2331]: E1124 06:59:06.240476 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.242265 kubelet[2331]: E1124 06:59:06.240695 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:06.244438 kubelet[2331]: E1124 06:59:06.244402 2331 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 
06:59:06.244610 kubelet[2331]: E1124 06:59:06.244565 2331 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:06.678476 kubelet[2331]: E1124 06:59:06.678434 2331 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.1-c-f92aac29d7\" not found" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.768522 kubelet[2331]: I1124 06:59:06.768470 2331 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.777742 kubelet[2331]: I1124 06:59:06.774691 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.811683 kubelet[2331]: E1124 06:59:06.811643 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-c-f92aac29d7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.812803 kubelet[2331]: I1124 06:59:06.812772 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.817065 kubelet[2331]: E1124 06:59:06.817022 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-c-f92aac29d7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.817247 kubelet[2331]: I1124 06:59:06.817234 2331 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:06.823111 kubelet[2331]: E1124 06:59:06.823023 2331 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:07.141161 kubelet[2331]: I1124 06:59:07.141090 2331 apiserver.go:52] "Watching apiserver" Nov 24 06:59:07.179508 kubelet[2331]: I1124 06:59:07.179437 2331 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 06:59:09.064631 systemd[1]: Reload requested from client PID 2615 ('systemctl') (unit session-7.scope)... Nov 24 06:59:09.064651 systemd[1]: Reloading... Nov 24 06:59:09.186777 zram_generator::config[2667]: No configuration found. Nov 24 06:59:09.436751 systemd[1]: Reloading finished in 371 ms. Nov 24 06:59:09.478040 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:59:09.478686 kubelet[2331]: I1124 06:59:09.478197 2331 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 06:59:09.502332 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 06:59:09.502660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:59:09.502743 systemd[1]: kubelet.service: Consumed 1.536s CPU time, 121.7M memory peak. Nov 24 06:59:09.505866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:59:09.696899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:59:09.712557 (kubelet)[2708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 06:59:09.786801 kubelet[2708]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 06:59:09.786801 kubelet[2708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 06:59:09.787200 kubelet[2708]: I1124 06:59:09.786860 2708 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 06:59:09.802043 kubelet[2708]: I1124 06:59:09.801999 2708 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 24 06:59:09.802043 kubelet[2708]: I1124 06:59:09.802030 2708 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 06:59:09.802043 kubelet[2708]: I1124 06:59:09.802059 2708 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 24 06:59:09.802043 kubelet[2708]: I1124 06:59:09.802065 2708 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 06:59:09.802414 kubelet[2708]: I1124 06:59:09.802393 2708 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 06:59:09.805624 kubelet[2708]: I1124 06:59:09.805585 2708 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 06:59:09.818421 kubelet[2708]: I1124 06:59:09.818368 2708 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 06:59:09.824949 kubelet[2708]: I1124 06:59:09.824915 2708 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 06:59:09.830771 kubelet[2708]: I1124 06:59:09.829367 2708 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 24 06:59:09.830771 kubelet[2708]: I1124 06:59:09.829639 2708 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 06:59:09.830771 kubelet[2708]: I1124 06:59:09.829678 2708 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-c-f92aac29d7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 06:59:09.830771 kubelet[2708]: I1124 06:59:09.829947 2708 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 
06:59:09.831064 kubelet[2708]: I1124 06:59:09.829958 2708 container_manager_linux.go:306] "Creating device plugin manager" Nov 24 06:59:09.831064 kubelet[2708]: I1124 06:59:09.829988 2708 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 24 06:59:09.831064 kubelet[2708]: I1124 06:59:09.830802 2708 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:59:09.831064 kubelet[2708]: I1124 06:59:09.830981 2708 kubelet.go:475] "Attempting to sync node with API server" Nov 24 06:59:09.831064 kubelet[2708]: I1124 06:59:09.830994 2708 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 06:59:09.831064 kubelet[2708]: I1124 06:59:09.831015 2708 kubelet.go:387] "Adding apiserver pod source" Nov 24 06:59:09.831064 kubelet[2708]: I1124 06:59:09.831035 2708 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 06:59:09.840611 kubelet[2708]: I1124 06:59:09.840575 2708 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 06:59:09.841254 kubelet[2708]: I1124 06:59:09.841101 2708 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 06:59:09.841254 kubelet[2708]: I1124 06:59:09.841135 2708 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 24 06:59:09.851317 kubelet[2708]: I1124 06:59:09.851284 2708 server.go:1262] "Started kubelet" Nov 24 06:59:09.857047 kubelet[2708]: I1124 06:59:09.856520 2708 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 06:59:09.867818 kubelet[2708]: I1124 06:59:09.867772 2708 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 06:59:09.876166 kubelet[2708]: I1124 06:59:09.875128 2708 volume_manager.go:313] "Starting Kubelet 
Volume Manager" Nov 24 06:59:09.879768 kubelet[2708]: I1124 06:59:09.879700 2708 server.go:310] "Adding debug handlers to kubelet server" Nov 24 06:59:09.881743 kubelet[2708]: I1124 06:59:09.880817 2708 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 06:59:09.884400 kubelet[2708]: I1124 06:59:09.882987 2708 reconciler.go:29] "Reconciler: start to sync state" Nov 24 06:59:09.896469 kubelet[2708]: I1124 06:59:09.896414 2708 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 06:59:09.896869 kubelet[2708]: I1124 06:59:09.896851 2708 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 24 06:59:09.897938 kubelet[2708]: I1124 06:59:09.897916 2708 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 06:59:09.898444 kubelet[2708]: I1124 06:59:09.898420 2708 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 06:59:09.908543 kubelet[2708]: I1124 06:59:09.908508 2708 factory.go:223] Registration of the systemd container factory successfully Nov 24 06:59:09.908881 kubelet[2708]: I1124 06:59:09.908856 2708 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 06:59:09.914018 kubelet[2708]: I1124 06:59:09.913951 2708 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 24 06:59:09.917065 kubelet[2708]: I1124 06:59:09.917020 2708 factory.go:223] Registration of the containerd container factory successfully Nov 24 06:59:09.919315 kubelet[2708]: I1124 06:59:09.919122 2708 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 24 06:59:09.919315 kubelet[2708]: I1124 06:59:09.919153 2708 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 24 06:59:09.919315 kubelet[2708]: I1124 06:59:09.919192 2708 kubelet.go:2427] "Starting kubelet main sync loop" Nov 24 06:59:09.919315 kubelet[2708]: E1124 06:59:09.919252 2708 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 06:59:09.986378 kubelet[2708]: I1124 06:59:09.985252 2708 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.986680 2708 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.986752 2708 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.986979 2708 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.987004 2708 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.987032 2708 policy_none.go:49] "None policy: Start" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.987048 2708 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.987064 2708 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.987220 2708 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 24 06:59:09.988772 kubelet[2708]: I1124 06:59:09.987235 2708 policy_none.go:47] "Start" Nov 24 06:59:10.000758 kubelet[2708]: E1124 06:59:10.000616 2708 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 06:59:10.001137 kubelet[2708]: I1124 06:59:10.001055 
2708 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 06:59:10.001393 kubelet[2708]: I1124 06:59:10.001288 2708 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 06:59:10.002340 kubelet[2708]: I1124 06:59:10.002308 2708 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 06:59:10.007705 kubelet[2708]: E1124 06:59:10.007674 2708 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 06:59:10.022058 kubelet[2708]: I1124 06:59:10.021684 2708 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.025335 kubelet[2708]: I1124 06:59:10.024992 2708 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.030333 kubelet[2708]: I1124 06:59:10.030285 2708 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.047969 kubelet[2708]: I1124 06:59:10.047930 2708 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 06:59:10.048936 kubelet[2708]: I1124 06:59:10.048902 2708 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 06:59:10.057447 kubelet[2708]: I1124 06:59:10.057399 2708 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 06:59:10.083602 kubelet[2708]: I1124 06:59:10.083536 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f922c021a54d41be346662dfc48bb952-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-c-f92aac29d7\" (UID: \"f922c021a54d41be346662dfc48bb952\") " pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.083602 kubelet[2708]: I1124 06:59:10.083585 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.083602 kubelet[2708]: I1124 06:59:10.083617 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.084073 kubelet[2708]: I1124 06:59:10.083634 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac89e3954f83fc413c4f45f1f27840d4-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-c-f92aac29d7\" (UID: \"ac89e3954f83fc413c4f45f1f27840d4\") " pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.084073 kubelet[2708]: I1124 06:59:10.083654 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac89e3954f83fc413c4f45f1f27840d4-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-c-f92aac29d7\" (UID: \"ac89e3954f83fc413c4f45f1f27840d4\") " pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.084073 kubelet[2708]: I1124 
06:59:10.083681 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac89e3954f83fc413c4f45f1f27840d4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-c-f92aac29d7\" (UID: \"ac89e3954f83fc413c4f45f1f27840d4\") " pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.084073 kubelet[2708]: I1124 06:59:10.083704 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.084073 kubelet[2708]: I1124 06:59:10.083857 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.084209 kubelet[2708]: I1124 06:59:10.083875 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b292ce6ce977935b6a4e8368ad478e2e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-c-f92aac29d7\" (UID: \"b292ce6ce977935b6a4e8368ad478e2e\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.111106 kubelet[2708]: I1124 06:59:10.111073 2708 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.126825 kubelet[2708]: I1124 06:59:10.126598 2708 kubelet_node_status.go:124] "Node was previously registered" 
node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.127368 kubelet[2708]: I1124 06:59:10.127174 2708 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.354781 kubelet[2708]: E1124 06:59:10.354256 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:10.355800 kubelet[2708]: E1124 06:59:10.355775 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:10.360763 kubelet[2708]: E1124 06:59:10.360636 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:10.838672 kubelet[2708]: I1124 06:59:10.838562 2708 apiserver.go:52] "Watching apiserver" Nov 24 06:59:10.881055 kubelet[2708]: I1124 06:59:10.880991 2708 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 06:59:10.966747 kubelet[2708]: E1124 06:59:10.966557 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:10.967171 kubelet[2708]: E1124 06:59:10.967139 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:10.967594 kubelet[2708]: I1124 06:59:10.967386 2708 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.979757 kubelet[2708]: I1124 06:59:10.978225 2708 warnings.go:110] "Warning: metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 24 06:59:10.979757 kubelet[2708]: E1124 06:59:10.978307 2708 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-c-f92aac29d7\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:10.979757 kubelet[2708]: E1124 06:59:10.978523 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:11.014565 kubelet[2708]: I1124 06:59:11.014143 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.1-c-f92aac29d7" podStartSLOduration=1.01407878 podStartE2EDuration="1.01407878s" podCreationTimestamp="2025-11-24 06:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:59:10.998582981 +0000 UTC m=+1.277530245" watchObservedRunningTime="2025-11-24 06:59:11.01407878 +0000 UTC m=+1.293026036" Nov 24 06:59:11.016584 kubelet[2708]: I1124 06:59:11.016483 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.1-c-f92aac29d7" podStartSLOduration=1.016445605 podStartE2EDuration="1.016445605s" podCreationTimestamp="2025-11-24 06:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:59:11.015502953 +0000 UTC m=+1.294450208" watchObservedRunningTime="2025-11-24 06:59:11.016445605 +0000 UTC m=+1.295392868" Nov 24 06:59:11.969330 kubelet[2708]: E1124 06:59:11.969289 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Nov 24 06:59:11.970536 kubelet[2708]: E1124 06:59:11.970004 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:12.971537 kubelet[2708]: E1124 06:59:12.971468 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:14.799483 kubelet[2708]: I1124 06:59:14.799447 2708 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 06:59:14.800527 containerd[1532]: time="2025-11-24T06:59:14.800324017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 06:59:14.801337 kubelet[2708]: I1124 06:59:14.801257 2708 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 06:59:15.094471 systemd-timesyncd[1397]: Contacted time server 38.45.64.130:123 (2.flatcar.pool.ntp.org). Nov 24 06:59:15.094571 systemd-timesyncd[1397]: Initial clock synchronization to Mon 2025-11-24 06:59:15.404519 UTC. 
Nov 24 06:59:15.894989 kubelet[2708]: I1124 06:59:15.894883 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.1-c-f92aac29d7" podStartSLOduration=5.894831609 podStartE2EDuration="5.894831609s" podCreationTimestamp="2025-11-24 06:59:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:59:11.028364091 +0000 UTC m=+1.307311367" watchObservedRunningTime="2025-11-24 06:59:15.894831609 +0000 UTC m=+6.173778869" Nov 24 06:59:15.911989 systemd[1]: Created slice kubepods-besteffort-podffef9ee7_23f0_4177_bf41_5297aee49c25.slice - libcontainer container kubepods-besteffort-podffef9ee7_23f0_4177_bf41_5297aee49c25.slice. Nov 24 06:59:15.918848 kubelet[2708]: I1124 06:59:15.918768 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffef9ee7-23f0-4177-bf41-5297aee49c25-kube-proxy\") pod \"kube-proxy-6kj2j\" (UID: \"ffef9ee7-23f0-4177-bf41-5297aee49c25\") " pod="kube-system/kube-proxy-6kj2j" Nov 24 06:59:15.919125 kubelet[2708]: I1124 06:59:15.919076 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffef9ee7-23f0-4177-bf41-5297aee49c25-xtables-lock\") pod \"kube-proxy-6kj2j\" (UID: \"ffef9ee7-23f0-4177-bf41-5297aee49c25\") " pod="kube-system/kube-proxy-6kj2j" Nov 24 06:59:15.919125 kubelet[2708]: I1124 06:59:15.919101 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffef9ee7-23f0-4177-bf41-5297aee49c25-lib-modules\") pod \"kube-proxy-6kj2j\" (UID: \"ffef9ee7-23f0-4177-bf41-5297aee49c25\") " pod="kube-system/kube-proxy-6kj2j" Nov 24 06:59:15.921833 kubelet[2708]: I1124 06:59:15.921791 2708 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhvh4\" (UniqueName: \"kubernetes.io/projected/ffef9ee7-23f0-4177-bf41-5297aee49c25-kube-api-access-bhvh4\") pod \"kube-proxy-6kj2j\" (UID: \"ffef9ee7-23f0-4177-bf41-5297aee49c25\") " pod="kube-system/kube-proxy-6kj2j" Nov 24 06:59:16.032078 systemd[1]: Created slice kubepods-besteffort-poddef6c8a4_591c_4e6a_b9da_6c7bd460f15b.slice - libcontainer container kubepods-besteffort-poddef6c8a4_591c_4e6a_b9da_6c7bd460f15b.slice. Nov 24 06:59:16.124826 kubelet[2708]: I1124 06:59:16.124763 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/def6c8a4-591c-4e6a-b9da-6c7bd460f15b-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-6gzq5\" (UID: \"def6c8a4-591c-4e6a-b9da-6c7bd460f15b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-6gzq5" Nov 24 06:59:16.124826 kubelet[2708]: I1124 06:59:16.124839 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5s75\" (UniqueName: \"kubernetes.io/projected/def6c8a4-591c-4e6a-b9da-6c7bd460f15b-kube-api-access-j5s75\") pod \"tigera-operator-65cdcdfd6d-6gzq5\" (UID: \"def6c8a4-591c-4e6a-b9da-6c7bd460f15b\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-6gzq5" Nov 24 06:59:16.229756 kubelet[2708]: E1124 06:59:16.229525 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:16.244222 containerd[1532]: time="2025-11-24T06:59:16.243908318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kj2j,Uid:ffef9ee7-23f0-4177-bf41-5297aee49c25,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:16.276495 containerd[1532]: time="2025-11-24T06:59:16.275948372Z" level=info msg="connecting to shim 
04c788aabf2f228fabf3e0c92c010a30a4cf7479e99caa2bd8dfa59e9b060454" address="unix:///run/containerd/s/1ae119d7d4c277f67ef56fabcde46acd04f1b24202e133cdfe51c490ed8895d8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:16.314547 systemd[1]: Started cri-containerd-04c788aabf2f228fabf3e0c92c010a30a4cf7479e99caa2bd8dfa59e9b060454.scope - libcontainer container 04c788aabf2f228fabf3e0c92c010a30a4cf7479e99caa2bd8dfa59e9b060454. Nov 24 06:59:16.343773 containerd[1532]: time="2025-11-24T06:59:16.343546079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-6gzq5,Uid:def6c8a4-591c-4e6a-b9da-6c7bd460f15b,Namespace:tigera-operator,Attempt:0,}" Nov 24 06:59:16.366477 containerd[1532]: time="2025-11-24T06:59:16.366420660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6kj2j,Uid:ffef9ee7-23f0-4177-bf41-5297aee49c25,Namespace:kube-system,Attempt:0,} returns sandbox id \"04c788aabf2f228fabf3e0c92c010a30a4cf7479e99caa2bd8dfa59e9b060454\"" Nov 24 06:59:16.368987 kubelet[2708]: E1124 06:59:16.368940 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:16.379790 containerd[1532]: time="2025-11-24T06:59:16.378699053Z" level=info msg="connecting to shim b9f94bc3eddfe19d7e8ca772f968c739bed53a9a35db7fabfdc738408824b40c" address="unix:///run/containerd/s/71dabe13b31866d70b5f7e64d64c7c98d97bbe83bdd761c6f18371a2e6cc2cc6" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:16.381412 containerd[1532]: time="2025-11-24T06:59:16.381354723Z" level=info msg="CreateContainer within sandbox \"04c788aabf2f228fabf3e0c92c010a30a4cf7479e99caa2bd8dfa59e9b060454\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 06:59:16.397397 containerd[1532]: time="2025-11-24T06:59:16.397322596Z" level=info msg="Container 28845c8b1886a64d62f009e29f0d510451d025f7ca34ca2cf62874e5e04ae93c: 
CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:16.409803 containerd[1532]: time="2025-11-24T06:59:16.409746188Z" level=info msg="CreateContainer within sandbox \"04c788aabf2f228fabf3e0c92c010a30a4cf7479e99caa2bd8dfa59e9b060454\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"28845c8b1886a64d62f009e29f0d510451d025f7ca34ca2cf62874e5e04ae93c\"" Nov 24 06:59:16.413127 containerd[1532]: time="2025-11-24T06:59:16.412710146Z" level=info msg="StartContainer for \"28845c8b1886a64d62f009e29f0d510451d025f7ca34ca2cf62874e5e04ae93c\"" Nov 24 06:59:16.418424 systemd[1]: Started cri-containerd-b9f94bc3eddfe19d7e8ca772f968c739bed53a9a35db7fabfdc738408824b40c.scope - libcontainer container b9f94bc3eddfe19d7e8ca772f968c739bed53a9a35db7fabfdc738408824b40c. Nov 24 06:59:16.420181 containerd[1532]: time="2025-11-24T06:59:16.420072152Z" level=info msg="connecting to shim 28845c8b1886a64d62f009e29f0d510451d025f7ca34ca2cf62874e5e04ae93c" address="unix:///run/containerd/s/1ae119d7d4c277f67ef56fabcde46acd04f1b24202e133cdfe51c490ed8895d8" protocol=ttrpc version=3 Nov 24 06:59:16.474438 systemd[1]: Started cri-containerd-28845c8b1886a64d62f009e29f0d510451d025f7ca34ca2cf62874e5e04ae93c.scope - libcontainer container 28845c8b1886a64d62f009e29f0d510451d025f7ca34ca2cf62874e5e04ae93c. 
Nov 24 06:59:16.506792 containerd[1532]: time="2025-11-24T06:59:16.506218791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-6gzq5,Uid:def6c8a4-591c-4e6a-b9da-6c7bd460f15b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b9f94bc3eddfe19d7e8ca772f968c739bed53a9a35db7fabfdc738408824b40c\"" Nov 24 06:59:16.518858 containerd[1532]: time="2025-11-24T06:59:16.518702997Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 06:59:16.600181 containerd[1532]: time="2025-11-24T06:59:16.600106965Z" level=info msg="StartContainer for \"28845c8b1886a64d62f009e29f0d510451d025f7ca34ca2cf62874e5e04ae93c\" returns successfully" Nov 24 06:59:16.808563 kubelet[2708]: E1124 06:59:16.808290 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:16.987804 kubelet[2708]: E1124 06:59:16.987759 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:16.989164 kubelet[2708]: E1124 06:59:16.989131 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:17.033025 kubelet[2708]: I1124 06:59:17.032956 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6kj2j" podStartSLOduration=2.032933433 podStartE2EDuration="2.032933433s" podCreationTimestamp="2025-11-24 06:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:59:17.030835062 +0000 UTC m=+7.309782328" watchObservedRunningTime="2025-11-24 06:59:17.032933433 +0000 UTC m=+7.311880702" Nov 24 
06:59:18.084023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379294139.mount: Deactivated successfully. Nov 24 06:59:18.089778 kubelet[2708]: E1124 06:59:18.088872 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:18.992684 kubelet[2708]: E1124 06:59:18.992612 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:19.978602 containerd[1532]: time="2025-11-24T06:59:19.978517015Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:19.979610 containerd[1532]: time="2025-11-24T06:59:19.979568331Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 06:59:19.981077 containerd[1532]: time="2025-11-24T06:59:19.980248116Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:19.983378 containerd[1532]: time="2025-11-24T06:59:19.983325771Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:19.984140 containerd[1532]: time="2025-11-24T06:59:19.984098635Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.465332268s" Nov 24 
06:59:19.984140 containerd[1532]: time="2025-11-24T06:59:19.984135892Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 06:59:19.989647 containerd[1532]: time="2025-11-24T06:59:19.989577886Z" level=info msg="CreateContainer within sandbox \"b9f94bc3eddfe19d7e8ca772f968c739bed53a9a35db7fabfdc738408824b40c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 06:59:19.995136 kubelet[2708]: E1124 06:59:19.995093 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:20.020798 containerd[1532]: time="2025-11-24T06:59:20.020391579Z" level=info msg="Container 6c0a37034e788ef822b74ebd1372c73ab9c135cad09d0a62aaee04778af31788: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:20.029761 containerd[1532]: time="2025-11-24T06:59:20.029693056Z" level=info msg="CreateContainer within sandbox \"b9f94bc3eddfe19d7e8ca772f968c739bed53a9a35db7fabfdc738408824b40c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6c0a37034e788ef822b74ebd1372c73ab9c135cad09d0a62aaee04778af31788\"" Nov 24 06:59:20.031171 containerd[1532]: time="2025-11-24T06:59:20.031125094Z" level=info msg="StartContainer for \"6c0a37034e788ef822b74ebd1372c73ab9c135cad09d0a62aaee04778af31788\"" Nov 24 06:59:20.032587 containerd[1532]: time="2025-11-24T06:59:20.032495411Z" level=info msg="connecting to shim 6c0a37034e788ef822b74ebd1372c73ab9c135cad09d0a62aaee04778af31788" address="unix:///run/containerd/s/71dabe13b31866d70b5f7e64d64c7c98d97bbe83bdd761c6f18371a2e6cc2cc6" protocol=ttrpc version=3 Nov 24 06:59:20.084054 systemd[1]: Started cri-containerd-6c0a37034e788ef822b74ebd1372c73ab9c135cad09d0a62aaee04778af31788.scope - libcontainer container 
6c0a37034e788ef822b74ebd1372c73ab9c135cad09d0a62aaee04778af31788. Nov 24 06:59:20.140852 containerd[1532]: time="2025-11-24T06:59:20.140782739Z" level=info msg="StartContainer for \"6c0a37034e788ef822b74ebd1372c73ab9c135cad09d0a62aaee04778af31788\" returns successfully" Nov 24 06:59:21.024689 kubelet[2708]: I1124 06:59:21.022930 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-6gzq5" podStartSLOduration=2.554382544 podStartE2EDuration="6.022912687s" podCreationTimestamp="2025-11-24 06:59:15 +0000 UTC" firstStartedPulling="2025-11-24 06:59:16.517209431 +0000 UTC m=+6.796156693" lastFinishedPulling="2025-11-24 06:59:19.985739585 +0000 UTC m=+10.264686836" observedRunningTime="2025-11-24 06:59:21.021650283 +0000 UTC m=+11.300597550" watchObservedRunningTime="2025-11-24 06:59:21.022912687 +0000 UTC m=+11.301859952" Nov 24 06:59:21.945126 kubelet[2708]: E1124 06:59:21.945075 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:22.002333 kubelet[2708]: E1124 06:59:22.002286 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:23.260761 update_engine[1503]: I20251124 06:59:23.258810 1503 update_attempter.cc:509] Updating boot flags... Nov 24 06:59:25.624605 sudo[1773]: pam_unix(sudo:session): session closed for user root Nov 24 06:59:25.630361 sshd[1772]: Connection closed by 139.178.68.195 port 51152 Nov 24 06:59:25.633239 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Nov 24 06:59:25.643113 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Nov 24 06:59:25.643242 systemd[1]: sshd@6-164.90.155.191:22-139.178.68.195:51152.service: Deactivated successfully. 
Nov 24 06:59:25.647256 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 06:59:25.647457 systemd[1]: session-7.scope: Consumed 6.520s CPU time, 168.4M memory peak. Nov 24 06:59:25.652525 systemd-logind[1496]: Removed session 7. Nov 24 06:59:32.579708 systemd[1]: Created slice kubepods-besteffort-pod440f660c_12b2_4dad_871c_188727ea88fb.slice - libcontainer container kubepods-besteffort-pod440f660c_12b2_4dad_871c_188727ea88fb.slice. Nov 24 06:59:32.652411 kubelet[2708]: I1124 06:59:32.652214 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/440f660c-12b2-4dad-871c-188727ea88fb-tigera-ca-bundle\") pod \"calico-typha-6fb75f966d-j5mcc\" (UID: \"440f660c-12b2-4dad-871c-188727ea88fb\") " pod="calico-system/calico-typha-6fb75f966d-j5mcc" Nov 24 06:59:32.652983 kubelet[2708]: I1124 06:59:32.652426 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g98mk\" (UniqueName: \"kubernetes.io/projected/440f660c-12b2-4dad-871c-188727ea88fb-kube-api-access-g98mk\") pod \"calico-typha-6fb75f966d-j5mcc\" (UID: \"440f660c-12b2-4dad-871c-188727ea88fb\") " pod="calico-system/calico-typha-6fb75f966d-j5mcc" Nov 24 06:59:32.652983 kubelet[2708]: I1124 06:59:32.652584 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/440f660c-12b2-4dad-871c-188727ea88fb-typha-certs\") pod \"calico-typha-6fb75f966d-j5mcc\" (UID: \"440f660c-12b2-4dad-871c-188727ea88fb\") " pod="calico-system/calico-typha-6fb75f966d-j5mcc" Nov 24 06:59:32.835668 systemd[1]: Created slice kubepods-besteffort-pod86423dc1_c6e7_4ff3_9eb6_3daa19ee3f80.slice - libcontainer container kubepods-besteffort-pod86423dc1_c6e7_4ff3_9eb6_3daa19ee3f80.slice. 
Nov 24 06:59:32.854370 kubelet[2708]: I1124 06:59:32.853676 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-cni-log-dir\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854370 kubelet[2708]: I1124 06:59:32.853757 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-cni-bin-dir\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854370 kubelet[2708]: I1124 06:59:32.853774 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-cni-net-dir\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854370 kubelet[2708]: I1124 06:59:32.853788 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-lib-modules\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854370 kubelet[2708]: I1124 06:59:32.854039 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-node-certs\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854704 kubelet[2708]: I1124 06:59:32.854079 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-policysync\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854704 kubelet[2708]: I1124 06:59:32.854096 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-var-lib-calico\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854704 kubelet[2708]: I1124 06:59:32.854113 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-xtables-lock\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854704 kubelet[2708]: I1124 06:59:32.854134 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-tigera-ca-bundle\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854704 kubelet[2708]: I1124 06:59:32.854156 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-var-run-calico\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854891 kubelet[2708]: I1124 06:59:32.854200 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-flexvol-driver-host\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.854891 kubelet[2708]: I1124 06:59:32.854229 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68t8r\" (UniqueName: \"kubernetes.io/projected/86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80-kube-api-access-68t8r\") pod \"calico-node-wctnl\" (UID: \"86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80\") " pod="calico-system/calico-node-wctnl"
Nov 24 06:59:32.886615 kubelet[2708]: E1124 06:59:32.886567 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 06:59:32.889459 containerd[1532]: time="2025-11-24T06:59:32.888960704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fb75f966d-j5mcc,Uid:440f660c-12b2-4dad-871c-188727ea88fb,Namespace:calico-system,Attempt:0,}"
Nov 24 06:59:32.907990 containerd[1532]: time="2025-11-24T06:59:32.907940979Z" level=info msg="connecting to shim 22c24b23c3322f59b34ded4f75e7dda459552a510ead99d50be9c46aa1a6452a" address="unix:///run/containerd/s/bf70f3fb23469824dd4f749b24f8275710269935acd42d8f842b87d83062d3da" namespace=k8s.io protocol=ttrpc version=3
Nov 24 06:59:32.951263 systemd[1]: Started cri-containerd-22c24b23c3322f59b34ded4f75e7dda459552a510ead99d50be9c46aa1a6452a.scope - libcontainer container 22c24b23c3322f59b34ded4f75e7dda459552a510ead99d50be9c46aa1a6452a.
Nov 24 06:59:32.957165 kubelet[2708]: E1124 06:59:32.956921 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.957165 kubelet[2708]: W1124 06:59:32.956969 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.957939 kubelet[2708]: E1124 06:59:32.957593 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.958280 kubelet[2708]: E1124 06:59:32.958088 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.958957 kubelet[2708]: W1124 06:59:32.958802 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.958957 kubelet[2708]: E1124 06:59:32.958833 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.959550 kubelet[2708]: E1124 06:59:32.959533 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.959739 kubelet[2708]: W1124 06:59:32.959704 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.960306 kubelet[2708]: E1124 06:59:32.960179 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.960503 kubelet[2708]: E1124 06:59:32.960490 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.960676 kubelet[2708]: W1124 06:59:32.960656 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.960871 kubelet[2708]: E1124 06:59:32.960752 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.961096 kubelet[2708]: E1124 06:59:32.961085 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.961237 kubelet[2708]: W1124 06:59:32.961224 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.961514 kubelet[2708]: E1124 06:59:32.961310 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.963754 kubelet[2708]: E1124 06:59:32.962764 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.963754 kubelet[2708]: W1124 06:59:32.962780 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.963754 kubelet[2708]: E1124 06:59:32.962794 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.965159 kubelet[2708]: E1124 06:59:32.965131 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.965159 kubelet[2708]: W1124 06:59:32.965152 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.965159 kubelet[2708]: E1124 06:59:32.965172 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.965431 kubelet[2708]: E1124 06:59:32.965417 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.965431 kubelet[2708]: W1124 06:59:32.965429 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.965546 kubelet[2708]: E1124 06:59:32.965440 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.975300 kubelet[2708]: E1124 06:59:32.974049 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.975300 kubelet[2708]: W1124 06:59:32.974086 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.975300 kubelet[2708]: E1124 06:59:32.974119 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:32.988661 kubelet[2708]: E1124 06:59:32.988590 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:32.988661 kubelet[2708]: W1124 06:59:32.988618 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:32.988661 kubelet[2708]: E1124 06:59:32.988638 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.037451 kubelet[2708]: E1124 06:59:33.037164 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca"
Nov 24 06:59:33.123899 containerd[1532]: time="2025-11-24T06:59:33.123537075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fb75f966d-j5mcc,Uid:440f660c-12b2-4dad-871c-188727ea88fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"22c24b23c3322f59b34ded4f75e7dda459552a510ead99d50be9c46aa1a6452a\""
Nov 24 06:59:33.125581 kubelet[2708]: E1124 06:59:33.125224 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 06:59:33.127171 containerd[1532]: time="2025-11-24T06:59:33.127137861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 24 06:59:33.133145 kubelet[2708]: E1124 06:59:33.133106 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.133145 kubelet[2708]: W1124 06:59:33.133132 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.133576 kubelet[2708]: E1124 06:59:33.133547 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.135035 kubelet[2708]: E1124 06:59:33.135000 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.135035 kubelet[2708]: W1124 06:59:33.135022 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.135172 kubelet[2708]: E1124 06:59:33.135056 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.135906 kubelet[2708]: E1124 06:59:33.135881 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.135906 kubelet[2708]: W1124 06:59:33.135901 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.136017 kubelet[2708]: E1124 06:59:33.135920 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.136865 kubelet[2708]: E1124 06:59:33.136843 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.136865 kubelet[2708]: W1124 06:59:33.136861 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.137012 kubelet[2708]: E1124 06:59:33.136877 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.137815 kubelet[2708]: E1124 06:59:33.137792 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.137815 kubelet[2708]: W1124 06:59:33.137809 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.137918 kubelet[2708]: E1124 06:59:33.137824 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.138071 kubelet[2708]: E1124 06:59:33.138055 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.138107 kubelet[2708]: W1124 06:59:33.138073 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.138107 kubelet[2708]: E1124 06:59:33.138086 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.138276 kubelet[2708]: E1124 06:59:33.138264 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.138319 kubelet[2708]: W1124 06:59:33.138279 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.138319 kubelet[2708]: E1124 06:59:33.138289 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.139208 kubelet[2708]: E1124 06:59:33.139184 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.139208 kubelet[2708]: W1124 06:59:33.139200 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.139303 kubelet[2708]: E1124 06:59:33.139213 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.141124 kubelet[2708]: E1124 06:59:33.141094 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.141124 kubelet[2708]: W1124 06:59:33.141113 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.141124 kubelet[2708]: E1124 06:59:33.141127 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.141400 kubelet[2708]: E1124 06:59:33.141383 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.141400 kubelet[2708]: W1124 06:59:33.141396 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.141468 kubelet[2708]: E1124 06:59:33.141406 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.141599 kubelet[2708]: E1124 06:59:33.141583 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.141599 kubelet[2708]: W1124 06:59:33.141597 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.141670 kubelet[2708]: E1124 06:59:33.141606 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.142245 kubelet[2708]: E1124 06:59:33.142226 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.142245 kubelet[2708]: W1124 06:59:33.142240 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.142343 kubelet[2708]: E1124 06:59:33.142253 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.142674 kubelet[2708]: E1124 06:59:33.142657 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.142674 kubelet[2708]: W1124 06:59:33.142671 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.142779 kubelet[2708]: E1124 06:59:33.142684 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.143331 kubelet[2708]: E1124 06:59:33.143313 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.143331 kubelet[2708]: W1124 06:59:33.143327 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.143419 kubelet[2708]: E1124 06:59:33.143338 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.143618 kubelet[2708]: E1124 06:59:33.143599 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.143618 kubelet[2708]: W1124 06:59:33.143616 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.143756 kubelet[2708]: E1124 06:59:33.143626 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.144373 kubelet[2708]: E1124 06:59:33.144353 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.144373 kubelet[2708]: W1124 06:59:33.144367 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.144464 kubelet[2708]: E1124 06:59:33.144379 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.145505 kubelet[2708]: E1124 06:59:33.145485 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.145505 kubelet[2708]: W1124 06:59:33.145502 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.145505 kubelet[2708]: E1124 06:59:33.145513 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.145769 kubelet[2708]: E1124 06:59:33.145749 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.145769 kubelet[2708]: W1124 06:59:33.145756 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.145769 kubelet[2708]: E1124 06:59:33.145766 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.146039 kubelet[2708]: E1124 06:59:33.146024 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.146039 kubelet[2708]: W1124 06:59:33.146037 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.146111 kubelet[2708]: E1124 06:59:33.146048 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.146401 kubelet[2708]: E1124 06:59:33.146385 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.146401 kubelet[2708]: W1124 06:59:33.146399 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.146527 kubelet[2708]: E1124 06:59:33.146410 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.148426 kubelet[2708]: E1124 06:59:33.148395 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 06:59:33.149477 containerd[1532]: time="2025-11-24T06:59:33.149425410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wctnl,Uid:86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80,Namespace:calico-system,Attempt:0,}"
Nov 24 06:59:33.158618 kubelet[2708]: E1124 06:59:33.158232 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.158618 kubelet[2708]: W1124 06:59:33.158289 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.158618 kubelet[2708]: E1124 06:59:33.158325 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.160481 kubelet[2708]: I1124 06:59:33.159019 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31736d8f-1244-4ceb-aaba-f284117475ca-varrun\") pod \"csi-node-driver-jmszh\" (UID: \"31736d8f-1244-4ceb-aaba-f284117475ca\") " pod="calico-system/csi-node-driver-jmszh"
Nov 24 06:59:33.160481 kubelet[2708]: E1124 06:59:33.159455 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.160481 kubelet[2708]: W1124 06:59:33.159977 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.160481 kubelet[2708]: E1124 06:59:33.160002 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.161307 kubelet[2708]: E1124 06:59:33.161258 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.161578 kubelet[2708]: W1124 06:59:33.161530 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.162595 kubelet[2708]: E1124 06:59:33.162479 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.163615 kubelet[2708]: E1124 06:59:33.163312 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.163934 kubelet[2708]: W1124 06:59:33.163738 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.163934 kubelet[2708]: E1124 06:59:33.163767 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.164604 kubelet[2708]: I1124 06:59:33.164485 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31736d8f-1244-4ceb-aaba-f284117475ca-socket-dir\") pod \"csi-node-driver-jmszh\" (UID: \"31736d8f-1244-4ceb-aaba-f284117475ca\") " pod="calico-system/csi-node-driver-jmszh"
Nov 24 06:59:33.165877 kubelet[2708]: E1124 06:59:33.165750 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.165877 kubelet[2708]: W1124 06:59:33.165829 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.166282 kubelet[2708]: E1124 06:59:33.166146 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.166599 kubelet[2708]: E1124 06:59:33.166429 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.166599 kubelet[2708]: W1124 06:59:33.166449 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.166599 kubelet[2708]: E1124 06:59:33.166467 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.166599 kubelet[2708]: I1124 06:59:33.166473 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lkh7\" (UniqueName: \"kubernetes.io/projected/31736d8f-1244-4ceb-aaba-f284117475ca-kube-api-access-9lkh7\") pod \"csi-node-driver-jmszh\" (UID: \"31736d8f-1244-4ceb-aaba-f284117475ca\") " pod="calico-system/csi-node-driver-jmszh"
Nov 24 06:59:33.167615 kubelet[2708]: E1124 06:59:33.166674 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.167615 kubelet[2708]: W1124 06:59:33.166683 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.167615 kubelet[2708]: E1124 06:59:33.166693 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.167615 kubelet[2708]: E1124 06:59:33.167323 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.167615 kubelet[2708]: W1124 06:59:33.167337 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.167615 kubelet[2708]: E1124 06:59:33.167350 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.167615 kubelet[2708]: I1124 06:59:33.167382 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31736d8f-1244-4ceb-aaba-f284117475ca-kubelet-dir\") pod \"csi-node-driver-jmszh\" (UID: \"31736d8f-1244-4ceb-aaba-f284117475ca\") " pod="calico-system/csi-node-driver-jmszh"
Nov 24 06:59:33.167615 kubelet[2708]: E1124 06:59:33.167585 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.167615 kubelet[2708]: W1124 06:59:33.167596 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.167861 kubelet[2708]: E1124 06:59:33.167609 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.167861 kubelet[2708]: I1124 06:59:33.167633 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31736d8f-1244-4ceb-aaba-f284117475ca-registration-dir\") pod \"csi-node-driver-jmszh\" (UID: \"31736d8f-1244-4ceb-aaba-f284117475ca\") " pod="calico-system/csi-node-driver-jmszh"
Nov 24 06:59:33.168313 kubelet[2708]: E1124 06:59:33.168118 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.168313 kubelet[2708]: W1124 06:59:33.168133 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.168313 kubelet[2708]: E1124 06:59:33.168180 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.169739 kubelet[2708]: E1124 06:59:33.169066 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.169739 kubelet[2708]: W1124 06:59:33.169081 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.169739 kubelet[2708]: E1124 06:59:33.169093 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.169739 kubelet[2708]: E1124 06:59:33.169474 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.169739 kubelet[2708]: W1124 06:59:33.169483 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.169739 kubelet[2708]: E1124 06:59:33.169495 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 24 06:59:33.169951 kubelet[2708]: E1124 06:59:33.169762 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 24 06:59:33.169951 kubelet[2708]: W1124 06:59:33.169774 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 24 06:59:33.169951 kubelet[2708]: E1124 06:59:33.169790 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.170684 kubelet[2708]: E1124 06:59:33.170492 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.170684 kubelet[2708]: W1124 06:59:33.170506 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.170684 kubelet[2708]: E1124 06:59:33.170516 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.171961 kubelet[2708]: E1124 06:59:33.171759 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.171961 kubelet[2708]: W1124 06:59:33.171778 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.171961 kubelet[2708]: E1124 06:59:33.171792 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.194535 containerd[1532]: time="2025-11-24T06:59:33.194480589Z" level=info msg="connecting to shim 0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc" address="unix:///run/containerd/s/3fa1fcade70ac385037e084f68a8276dcc50053b35c52c9f07bc375f148774b3" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:33.223046 systemd[1]: Started cri-containerd-0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc.scope - libcontainer container 0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc. 
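The repeated driver-call.go:262 / driver-call.go:149 pairs above have a single cause: the kubelet invokes the FlexVolume driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the executable is not found, and the resulting empty stdout is then unmarshalled as JSON; Go's encoding/json reports empty input as "unexpected end of JSON input". A minimal Python sketch of the same failure mode (Python's json module raises a different message, but the mechanism is identical; the function name is illustrative, not kubelet code):

```python
import json

def parse_driver_output(output: str):
    """Mimic the kubelet's unmarshal step: driver stdout must be JSON."""
    try:
        return json.loads(output)
    except json.JSONDecodeError:
        # Empty stdout from a missing executable lands here, analogous
        # to Go's "unexpected end of JSON input".
        return None

print(parse_driver_output(""))                      # empty stdout -> None
print(parse_driver_output('{"status": "Success"}'))  # well-formed driver reply
```

Because the kubelet re-probes the plugin directory on every volume reconcile pass, the same three-line error group recurs throughout the log below until the driver binary exists.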
Nov 24 06:59:33.268646 kubelet[2708]: E1124 06:59:33.268607 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.268959 kubelet[2708]: W1124 06:59:33.268785 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.268959 kubelet[2708]: E1124 06:59:33.268819 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.269804 kubelet[2708]: E1124 06:59:33.269770 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.269990 kubelet[2708]: W1124 06:59:33.269884 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.269990 kubelet[2708]: E1124 06:59:33.269911 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.271032 kubelet[2708]: E1124 06:59:33.270933 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.271032 kubelet[2708]: W1124 06:59:33.270965 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.271032 kubelet[2708]: E1124 06:59:33.271004 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.271406 kubelet[2708]: E1124 06:59:33.271328 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.271406 kubelet[2708]: W1124 06:59:33.271349 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.271406 kubelet[2708]: E1124 06:59:33.271365 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.271734 kubelet[2708]: E1124 06:59:33.271621 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.271734 kubelet[2708]: W1124 06:59:33.271636 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.271734 kubelet[2708]: E1124 06:59:33.271649 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.272013 kubelet[2708]: E1124 06:59:33.271816 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.272013 kubelet[2708]: W1124 06:59:33.271823 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.272013 kubelet[2708]: E1124 06:59:33.271831 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.272013 kubelet[2708]: E1124 06:59:33.272046 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.272013 kubelet[2708]: W1124 06:59:33.272054 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.272013 kubelet[2708]: E1124 06:59:33.272063 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.272013 kubelet[2708]: E1124 06:59:33.272235 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.272013 kubelet[2708]: W1124 06:59:33.272245 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.272013 kubelet[2708]: E1124 06:59:33.272257 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.272013 kubelet[2708]: E1124 06:59:33.272443 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.273002 kubelet[2708]: W1124 06:59:33.272451 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.273002 kubelet[2708]: E1124 06:59:33.272460 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.273835 kubelet[2708]: E1124 06:59:33.273811 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.273835 kubelet[2708]: W1124 06:59:33.273833 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.273948 kubelet[2708]: E1124 06:59:33.273852 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.274354 kubelet[2708]: E1124 06:59:33.274175 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.274354 kubelet[2708]: W1124 06:59:33.274189 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.274354 kubelet[2708]: E1124 06:59:33.274247 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.274672 kubelet[2708]: E1124 06:59:33.274648 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.274672 kubelet[2708]: W1124 06:59:33.274668 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.275108 kubelet[2708]: E1124 06:59:33.274684 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.275463 kubelet[2708]: E1124 06:59:33.275154 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.275463 kubelet[2708]: W1124 06:59:33.275169 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.275463 kubelet[2708]: E1124 06:59:33.275186 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.275928 kubelet[2708]: E1124 06:59:33.275632 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.275928 kubelet[2708]: W1124 06:59:33.275642 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.275928 kubelet[2708]: E1124 06:59:33.275654 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.275928 kubelet[2708]: E1124 06:59:33.275880 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.275928 kubelet[2708]: W1124 06:59:33.275891 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.275928 kubelet[2708]: E1124 06:59:33.275904 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.276129 kubelet[2708]: E1124 06:59:33.276077 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.276129 kubelet[2708]: W1124 06:59:33.276087 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.276129 kubelet[2708]: E1124 06:59:33.276099 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.277133 kubelet[2708]: E1124 06:59:33.276321 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.277133 kubelet[2708]: W1124 06:59:33.276340 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.277133 kubelet[2708]: E1124 06:59:33.276355 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.277846 kubelet[2708]: E1124 06:59:33.277829 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.278305 kubelet[2708]: W1124 06:59:33.278285 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.278550 kubelet[2708]: E1124 06:59:33.278463 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.279073 kubelet[2708]: E1124 06:59:33.278943 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.279073 kubelet[2708]: W1124 06:59:33.278957 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.279309 kubelet[2708]: E1124 06:59:33.279219 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.279905 kubelet[2708]: E1124 06:59:33.279803 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.279905 kubelet[2708]: W1124 06:59:33.279816 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.279905 kubelet[2708]: E1124 06:59:33.279828 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.280326 kubelet[2708]: E1124 06:59:33.280293 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.280506 kubelet[2708]: W1124 06:59:33.280408 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.280506 kubelet[2708]: E1124 06:59:33.280430 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.280980 kubelet[2708]: E1124 06:59:33.280935 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.280980 kubelet[2708]: W1124 06:59:33.280948 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.280980 kubelet[2708]: E1124 06:59:33.280959 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.281753 kubelet[2708]: E1124 06:59:33.281681 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.281753 kubelet[2708]: W1124 06:59:33.281698 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.281948 kubelet[2708]: E1124 06:59:33.281842 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.282300 kubelet[2708]: E1124 06:59:33.282274 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.282388 kubelet[2708]: W1124 06:59:33.282352 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.282596 kubelet[2708]: E1124 06:59:33.282434 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:33.283757 containerd[1532]: time="2025-11-24T06:59:33.283312026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wctnl,Uid:86423dc1-c6e7-4ff3-9eb6-3daa19ee3f80,Namespace:calico-system,Attempt:0,} returns sandbox id \"0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc\"" Nov 24 06:59:33.283947 kubelet[2708]: E1124 06:59:33.283888 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.283947 kubelet[2708]: W1124 06:59:33.283903 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.283947 kubelet[2708]: E1124 06:59:33.283918 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:33.285146 kubelet[2708]: E1124 06:59:33.285094 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:33.292998 kubelet[2708]: E1124 06:59:33.292961 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:33.293229 kubelet[2708]: W1124 06:59:33.293157 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:33.293229 kubelet[2708]: E1124 06:59:33.293188 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:34.481560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488995706.mount: Deactivated successfully. 
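The dns.go:154 "Nameserver limits exceeded" warning above fires because glibc-style resolvers honor at most three nameserver entries, so the kubelet truncates the list before applying it; note that the applied line in this log even carries a duplicate (67.207.67.3 appears twice). A sketch of that truncation under the three-entry assumption (constant and function names are illustrative, and the fourth server below is hypothetical, not from the log):

```python
# Assumption: mirrors the three-nameserver limit the kubelet warning refers to.
MAX_NAMESERVERS = 3

def applied_nameserver_line(servers):
    """Keep only the first three entries, as a glibc resolver would."""
    return " ".join(servers[:MAX_NAMESERVERS])

# Reproduces the applied line from the log entry above, duplicate and all
# ("198.51.100.1" is a made-up fourth entry standing in for the omitted one):
print(applied_nameserver_line(
    ["67.207.67.3", "67.207.67.2", "67.207.67.3", "198.51.100.1"]
))  # prints: 67.207.67.3 67.207.67.2 67.207.67.3
```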
Nov 24 06:59:34.919961 kubelet[2708]: E1124 06:59:34.919878 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 06:59:35.969561 containerd[1532]: time="2025-11-24T06:59:35.969026539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:35.970244 containerd[1532]: time="2025-11-24T06:59:35.969860501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 06:59:35.971991 containerd[1532]: time="2025-11-24T06:59:35.971255477Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:35.976653 containerd[1532]: time="2025-11-24T06:59:35.976499838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:35.978141 containerd[1532]: time="2025-11-24T06:59:35.978079798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.850626495s" Nov 24 06:59:35.978141 containerd[1532]: time="2025-11-24T06:59:35.978138252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 06:59:35.982272 containerd[1532]: time="2025-11-24T06:59:35.981933121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 06:59:36.017990 containerd[1532]: time="2025-11-24T06:59:36.017760923Z" level=info msg="CreateContainer within sandbox \"22c24b23c3322f59b34ded4f75e7dda459552a510ead99d50be9c46aa1a6452a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 06:59:36.032808 containerd[1532]: time="2025-11-24T06:59:36.030005599Z" level=info msg="Container 2e1da810b14c684527c5084a599f7cf26813c7f7b3978219b4a1e034e79f133f: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:36.037543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505100931.mount: Deactivated successfully. Nov 24 06:59:36.042218 containerd[1532]: time="2025-11-24T06:59:36.042157849Z" level=info msg="CreateContainer within sandbox \"22c24b23c3322f59b34ded4f75e7dda459552a510ead99d50be9c46aa1a6452a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2e1da810b14c684527c5084a599f7cf26813c7f7b3978219b4a1e034e79f133f\"" Nov 24 06:59:36.043604 containerd[1532]: time="2025-11-24T06:59:36.043248081Z" level=info msg="StartContainer for \"2e1da810b14c684527c5084a599f7cf26813c7f7b3978219b4a1e034e79f133f\"" Nov 24 06:59:36.044396 containerd[1532]: time="2025-11-24T06:59:36.044363274Z" level=info msg="connecting to shim 2e1da810b14c684527c5084a599f7cf26813c7f7b3978219b4a1e034e79f133f" address="unix:///run/containerd/s/bf70f3fb23469824dd4f749b24f8275710269935acd42d8f842b87d83062d3da" protocol=ttrpc version=3 Nov 24 06:59:36.081031 systemd[1]: Started cri-containerd-2e1da810b14c684527c5084a599f7cf26813c7f7b3978219b4a1e034e79f133f.scope - libcontainer container 2e1da810b14c684527c5084a599f7cf26813c7f7b3978219b4a1e034e79f133f. 
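The FlexVolume probe failures recur below each time the kubelet rescans its plugin directory. A hedged shell check (driver path copied verbatim from the log entries; the function name is illustrative) for whether the expected binary is actually present and executable on the node:

```shell
check_flexvolume_driver() {
  # $1: vendor~driver directory; the kubelet expects an executable
  # named after the driver part ("uds" for nodeagent~uds).
  if [ -x "$1/uds" ]; then
    echo "driver present"
  else
    echo "driver missing or not executable"
  fi
}

# Path taken from the kubelet log entries in this journal
check_flexvolume_driver \
  "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
```

On this node the check would report the binary missing, which matches the "executable file not found in $PATH" warnings throughout the log.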
Nov 24 06:59:36.156760 containerd[1532]: time="2025-11-24T06:59:36.156356804Z" level=info msg="StartContainer for \"2e1da810b14c684527c5084a599f7cf26813c7f7b3978219b4a1e034e79f133f\" returns successfully" Nov 24 06:59:36.920061 kubelet[2708]: E1124 06:59:36.919931 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 06:59:37.067028 kubelet[2708]: E1124 06:59:37.066364 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:37.080886 kubelet[2708]: E1124 06:59:37.080851 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.081098 kubelet[2708]: W1124 06:59:37.081074 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.081187 kubelet[2708]: E1124 06:59:37.081175 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.081627 kubelet[2708]: E1124 06:59:37.081604 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.081769 kubelet[2708]: W1124 06:59:37.081753 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.081847 kubelet[2708]: E1124 06:59:37.081836 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.082210 kubelet[2708]: E1124 06:59:37.082187 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.082298 kubelet[2708]: W1124 06:59:37.082286 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.082411 kubelet[2708]: E1124 06:59:37.082357 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.082754 kubelet[2708]: E1124 06:59:37.082739 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.082928 kubelet[2708]: W1124 06:59:37.082822 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.082928 kubelet[2708]: E1124 06:59:37.082842 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.083449 kubelet[2708]: E1124 06:59:37.083341 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.083449 kubelet[2708]: W1124 06:59:37.083354 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.083449 kubelet[2708]: E1124 06:59:37.083365 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.083659 kubelet[2708]: E1124 06:59:37.083629 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.083659 kubelet[2708]: W1124 06:59:37.083639 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.083836 kubelet[2708]: E1124 06:59:37.083649 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.084042 kubelet[2708]: E1124 06:59:37.084018 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.084191 kubelet[2708]: W1124 06:59:37.084119 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.084191 kubelet[2708]: E1124 06:59:37.084145 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.084534 kubelet[2708]: E1124 06:59:37.084491 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.084534 kubelet[2708]: W1124 06:59:37.084502 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.084534 kubelet[2708]: E1124 06:59:37.084514 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.084893 kubelet[2708]: E1124 06:59:37.084867 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.084893 kubelet[2708]: W1124 06:59:37.084878 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.085017 kubelet[2708]: E1124 06:59:37.084979 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.085305 kubelet[2708]: E1124 06:59:37.085242 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.085305 kubelet[2708]: W1124 06:59:37.085254 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.085305 kubelet[2708]: E1124 06:59:37.085264 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.085672 kubelet[2708]: E1124 06:59:37.085607 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.085672 kubelet[2708]: W1124 06:59:37.085618 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.085672 kubelet[2708]: E1124 06:59:37.085628 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.086186 kubelet[2708]: E1124 06:59:37.086165 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.086246 kubelet[2708]: W1124 06:59:37.086187 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.086246 kubelet[2708]: E1124 06:59:37.086206 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.087636 kubelet[2708]: E1124 06:59:37.087076 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.087636 kubelet[2708]: W1124 06:59:37.087096 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.087636 kubelet[2708]: E1124 06:59:37.087110 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.087636 kubelet[2708]: E1124 06:59:37.087382 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.087636 kubelet[2708]: W1124 06:59:37.087394 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.087636 kubelet[2708]: E1124 06:59:37.087409 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.088098 kubelet[2708]: E1124 06:59:37.087869 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.088098 kubelet[2708]: W1124 06:59:37.087884 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.088098 kubelet[2708]: E1124 06:59:37.087897 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.090383 kubelet[2708]: I1124 06:59:37.090306 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fb75f966d-j5mcc" podStartSLOduration=2.235997673 podStartE2EDuration="5.090285929s" podCreationTimestamp="2025-11-24 06:59:32 +0000 UTC" firstStartedPulling="2025-11-24 06:59:33.126769005 +0000 UTC m=+23.405716261" lastFinishedPulling="2025-11-24 06:59:35.981057262 +0000 UTC m=+26.260004517" observedRunningTime="2025-11-24 06:59:37.088476051 +0000 UTC m=+27.367423315" watchObservedRunningTime="2025-11-24 06:59:37.090285929 +0000 UTC m=+27.369233197" Nov 24 06:59:37.107126 kubelet[2708]: E1124 06:59:37.107032 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.107126 kubelet[2708]: W1124 06:59:37.107061 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.107126 kubelet[2708]: E1124 06:59:37.107092 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.107672 kubelet[2708]: E1124 06:59:37.107644 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.107846 kubelet[2708]: W1124 06:59:37.107658 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.107846 kubelet[2708]: E1124 06:59:37.107789 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.108538 kubelet[2708]: E1124 06:59:37.108500 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.108768 kubelet[2708]: W1124 06:59:37.108532 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.108824 kubelet[2708]: E1124 06:59:37.108779 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.110892 kubelet[2708]: E1124 06:59:37.110856 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.110892 kubelet[2708]: W1124 06:59:37.110883 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.111093 kubelet[2708]: E1124 06:59:37.110907 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.111223 kubelet[2708]: E1124 06:59:37.111203 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.111266 kubelet[2708]: W1124 06:59:37.111222 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.111266 kubelet[2708]: E1124 06:59:37.111244 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.111592 kubelet[2708]: E1124 06:59:37.111576 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.111628 kubelet[2708]: W1124 06:59:37.111594 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.111744 kubelet[2708]: E1124 06:59:37.111632 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.112003 kubelet[2708]: E1124 06:59:37.111986 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.112142 kubelet[2708]: W1124 06:59:37.112001 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.112142 kubelet[2708]: E1124 06:59:37.112032 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.112661 kubelet[2708]: E1124 06:59:37.112644 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.112844 kubelet[2708]: W1124 06:59:37.112827 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.112900 kubelet[2708]: E1124 06:59:37.112851 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.113375 kubelet[2708]: E1124 06:59:37.113277 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.113375 kubelet[2708]: W1124 06:59:37.113292 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.113375 kubelet[2708]: E1124 06:59:37.113304 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.113742 kubelet[2708]: E1124 06:59:37.113645 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.113742 kubelet[2708]: W1124 06:59:37.113657 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.113742 kubelet[2708]: E1124 06:59:37.113668 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.114088 kubelet[2708]: E1124 06:59:37.114000 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.114088 kubelet[2708]: W1124 06:59:37.114013 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.114088 kubelet[2708]: E1124 06:59:37.114023 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.114538 kubelet[2708]: E1124 06:59:37.114498 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.114538 kubelet[2708]: W1124 06:59:37.114511 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.114538 kubelet[2708]: E1124 06:59:37.114521 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.115221 kubelet[2708]: E1124 06:59:37.115076 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.115221 kubelet[2708]: W1124 06:59:37.115087 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.115221 kubelet[2708]: E1124 06:59:37.115097 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.115566 kubelet[2708]: E1124 06:59:37.115555 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.115624 kubelet[2708]: W1124 06:59:37.115615 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.115674 kubelet[2708]: E1124 06:59:37.115665 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.116038 kubelet[2708]: E1124 06:59:37.115914 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.116038 kubelet[2708]: W1124 06:59:37.115931 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.116038 kubelet[2708]: E1124 06:59:37.115944 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.116327 kubelet[2708]: E1124 06:59:37.116313 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.116755 kubelet[2708]: W1124 06:59:37.116526 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.116755 kubelet[2708]: E1124 06:59:37.116551 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.117083 kubelet[2708]: E1124 06:59:37.117067 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.117221 kubelet[2708]: W1124 06:59:37.117163 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.117221 kubelet[2708]: E1124 06:59:37.117183 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:59:37.118390 kubelet[2708]: E1124 06:59:37.118188 2708 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:59:37.118390 kubelet[2708]: W1124 06:59:37.118385 2708 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:59:37.118532 kubelet[2708]: E1124 06:59:37.118400 2708 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:59:37.224217 containerd[1532]: time="2025-11-24T06:59:37.224055815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:37.225978 containerd[1532]: time="2025-11-24T06:59:37.225892729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 06:59:37.227442 containerd[1532]: time="2025-11-24T06:59:37.227129076Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:37.232277 containerd[1532]: time="2025-11-24T06:59:37.232213487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:37.233413 containerd[1532]: time="2025-11-24T06:59:37.233270706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.251003163s" Nov 24 06:59:37.233413 containerd[1532]: time="2025-11-24T06:59:37.233308143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 06:59:37.240753 containerd[1532]: time="2025-11-24T06:59:37.240004255Z" level=info msg="CreateContainer within sandbox \"0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 06:59:37.275105 containerd[1532]: time="2025-11-24T06:59:37.275050220Z" level=info msg="Container 1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:37.281508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135777428.mount: Deactivated successfully. Nov 24 06:59:37.292046 containerd[1532]: time="2025-11-24T06:59:37.291978977Z" level=info msg="CreateContainer within sandbox \"0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60\"" Nov 24 06:59:37.293039 containerd[1532]: time="2025-11-24T06:59:37.292923812Z" level=info msg="StartContainer for \"1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60\"" Nov 24 06:59:37.294927 containerd[1532]: time="2025-11-24T06:59:37.294897584Z" level=info msg="connecting to shim 1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60" address="unix:///run/containerd/s/3fa1fcade70ac385037e084f68a8276dcc50053b35c52c9f07bc375f148774b3" protocol=ttrpc version=3 Nov 24 06:59:37.325955 systemd[1]: Started cri-containerd-1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60.scope - libcontainer container 1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60. Nov 24 06:59:37.397301 containerd[1532]: time="2025-11-24T06:59:37.397251278Z" level=info msg="StartContainer for \"1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60\" returns successfully" Nov 24 06:59:37.418798 systemd[1]: cri-containerd-1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60.scope: Deactivated successfully. 
Nov 24 06:59:37.460531 containerd[1532]: time="2025-11-24T06:59:37.460421778Z" level=info msg="received container exit event container_id:\"1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60\" id:\"1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60\" pid:3411 exited_at:{seconds:1763967577 nanos:424107786}" Nov 24 06:59:37.509184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1215b66ac58c9d93b59b5f796a7f38dab2676176e12dcd68aef8f1ff6cbe6f60-rootfs.mount: Deactivated successfully. Nov 24 06:59:38.071317 kubelet[2708]: I1124 06:59:38.071277 2708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 06:59:38.075398 kubelet[2708]: E1124 06:59:38.074029 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:38.075398 kubelet[2708]: E1124 06:59:38.071813 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:38.077831 containerd[1532]: time="2025-11-24T06:59:38.076799962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 06:59:38.920817 kubelet[2708]: E1124 06:59:38.920642 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 06:59:40.920433 kubelet[2708]: E1124 06:59:40.920331 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 06:59:42.920446 kubelet[2708]: E1124 06:59:42.920330 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 06:59:43.039075 containerd[1532]: time="2025-11-24T06:59:43.039023078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:43.040424 containerd[1532]: time="2025-11-24T06:59:43.040368780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 06:59:43.040944 containerd[1532]: time="2025-11-24T06:59:43.040915964Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:43.043647 containerd[1532]: time="2025-11-24T06:59:43.043601437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:43.044902 containerd[1532]: time="2025-11-24T06:59:43.044858829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.968019731s" Nov 24 06:59:43.044902 containerd[1532]: time="2025-11-24T06:59:43.044894375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 06:59:43.049290 containerd[1532]: time="2025-11-24T06:59:43.049243464Z" level=info msg="CreateContainer within sandbox \"0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 06:59:43.059931 containerd[1532]: time="2025-11-24T06:59:43.059882885Z" level=info msg="Container 5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:43.108290 containerd[1532]: time="2025-11-24T06:59:43.107986462Z" level=info msg="CreateContainer within sandbox \"0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8\"" Nov 24 06:59:43.109692 containerd[1532]: time="2025-11-24T06:59:43.109629257Z" level=info msg="StartContainer for \"5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8\"" Nov 24 06:59:43.115021 containerd[1532]: time="2025-11-24T06:59:43.114979016Z" level=info msg="connecting to shim 5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8" address="unix:///run/containerd/s/3fa1fcade70ac385037e084f68a8276dcc50053b35c52c9f07bc375f148774b3" protocol=ttrpc version=3 Nov 24 06:59:43.155083 systemd[1]: Started cri-containerd-5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8.scope - libcontainer container 5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8. Nov 24 06:59:43.253729 containerd[1532]: time="2025-11-24T06:59:43.253241849Z" level=info msg="StartContainer for \"5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8\" returns successfully" Nov 24 06:59:43.892732 systemd[1]: cri-containerd-5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8.scope: Deactivated successfully. 
Nov 24 06:59:43.894417 systemd[1]: cri-containerd-5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8.scope: Consumed 658ms CPU time, 165.6M memory peak, 14.2M read from disk, 171.3M written to disk. Nov 24 06:59:43.914033 containerd[1532]: time="2025-11-24T06:59:43.913859101Z" level=info msg="received container exit event container_id:\"5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8\" id:\"5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8\" pid:3469 exited_at:{seconds:1763967583 nanos:913470789}" Nov 24 06:59:43.966828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e7f1eb744121a421f0b059637ed82422db07231e8787c981f8039b9f07252f8-rootfs.mount: Deactivated successfully. Nov 24 06:59:43.973044 kubelet[2708]: I1124 06:59:43.972982 2708 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 24 06:59:44.045565 systemd[1]: Created slice kubepods-burstable-podf6d6def7_1dbb_4073_936b_e22bda94a97c.slice - libcontainer container kubepods-burstable-podf6d6def7_1dbb_4073_936b_e22bda94a97c.slice. 
Nov 24 06:59:44.061159 kubelet[2708]: I1124 06:59:44.061103 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-792tc\" (UniqueName: \"kubernetes.io/projected/f6d6def7-1dbb-4073-936b-e22bda94a97c-kube-api-access-792tc\") pod \"coredns-66bc5c9577-v4mwq\" (UID: \"f6d6def7-1dbb-4073-936b-e22bda94a97c\") " pod="kube-system/coredns-66bc5c9577-v4mwq" Nov 24 06:59:44.061552 kubelet[2708]: I1124 06:59:44.061382 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6d6def7-1dbb-4073-936b-e22bda94a97c-config-volume\") pod \"coredns-66bc5c9577-v4mwq\" (UID: \"f6d6def7-1dbb-4073-936b-e22bda94a97c\") " pod="kube-system/coredns-66bc5c9577-v4mwq" Nov 24 06:59:44.062596 systemd[1]: Created slice kubepods-burstable-pod70e145da_424d_4d20_b7bf_e0cf67bf5a55.slice - libcontainer container kubepods-burstable-pod70e145da_424d_4d20_b7bf_e0cf67bf5a55.slice. Nov 24 06:59:44.073767 systemd[1]: Created slice kubepods-besteffort-pod7e8ad9f8_a5c0_4992_b252_00ca50c053ae.slice - libcontainer container kubepods-besteffort-pod7e8ad9f8_a5c0_4992_b252_00ca50c053ae.slice. Nov 24 06:59:44.089005 systemd[1]: Created slice kubepods-besteffort-pode6c5825d_6ef5_4e3c_a5de_981230cfd835.slice - libcontainer container kubepods-besteffort-pode6c5825d_6ef5_4e3c_a5de_981230cfd835.slice. Nov 24 06:59:44.098581 systemd[1]: Created slice kubepods-besteffort-pod9a41cd22_6538_4edb_a2fc_fe53fb988efb.slice - libcontainer container kubepods-besteffort-pod9a41cd22_6538_4edb_a2fc_fe53fb988efb.slice. Nov 24 06:59:44.112650 systemd[1]: Created slice kubepods-besteffort-pod9436af66_5c36_4d0f_a831_5035675bad6c.slice - libcontainer container kubepods-besteffort-pod9436af66_5c36_4d0f_a831_5035675bad6c.slice. 
Nov 24 06:59:44.126041 systemd[1]: Created slice kubepods-besteffort-pod65e83e8e_3217_4c5d_9dc8_a1e6cab084a7.slice - libcontainer container kubepods-besteffort-pod65e83e8e_3217_4c5d_9dc8_a1e6cab084a7.slice. Nov 24 06:59:44.139251 systemd[1]: Created slice kubepods-besteffort-pod688d478f_9397_4780_ac83_825ed42a52b7.slice - libcontainer container kubepods-besteffort-pod688d478f_9397_4780_ac83_825ed42a52b7.slice. Nov 24 06:59:44.150084 kubelet[2708]: E1124 06:59:44.147700 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:44.162618 kubelet[2708]: I1124 06:59:44.162424 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70e145da-424d-4d20-b7bf-e0cf67bf5a55-config-volume\") pod \"coredns-66bc5c9577-kv44w\" (UID: \"70e145da-424d-4d20-b7bf-e0cf67bf5a55\") " pod="kube-system/coredns-66bc5c9577-kv44w" Nov 24 06:59:44.163089 kubelet[2708]: I1124 06:59:44.162972 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-backend-key-pair\") pod \"whisker-5767f48f6c-fl897\" (UID: \"9436af66-5c36-4d0f-a831-5035675bad6c\") " pod="calico-system/whisker-5767f48f6c-fl897" Nov 24 06:59:44.163304 kubelet[2708]: I1124 06:59:44.163249 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6c5825d-6ef5-4e3c-a5de-981230cfd835-calico-apiserver-certs\") pod \"calico-apiserver-c87b844f6-pjvfq\" (UID: \"e6c5825d-6ef5-4e3c-a5de-981230cfd835\") " pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" Nov 24 06:59:44.163446 kubelet[2708]: I1124 06:59:44.163369 2708 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/688d478f-9397-4780-ac83-825ed42a52b7-calico-apiserver-certs\") pod \"calico-apiserver-79c5bccf9c-vgh4k\" (UID: \"688d478f-9397-4780-ac83-825ed42a52b7\") " pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" Nov 24 06:59:44.163543 kubelet[2708]: I1124 06:59:44.163525 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a41cd22-6538-4edb-a2fc-fe53fb988efb-tigera-ca-bundle\") pod \"calico-kube-controllers-589b8dfb96-w2xrv\" (UID: \"9a41cd22-6538-4edb-a2fc-fe53fb988efb\") " pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" Nov 24 06:59:44.163892 kubelet[2708]: I1124 06:59:44.163864 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/65e83e8e-3217-4c5d-9dc8-a1e6cab084a7-goldmane-key-pair\") pod \"goldmane-7c778bb748-xx5mj\" (UID: \"65e83e8e-3217-4c5d-9dc8-a1e6cab084a7\") " pod="calico-system/goldmane-7c778bb748-xx5mj" Nov 24 06:59:44.163986 kubelet[2708]: I1124 06:59:44.163975 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e8ad9f8-a5c0-4992-b252-00ca50c053ae-calico-apiserver-certs\") pod \"calico-apiserver-79c5bccf9c-g8247\" (UID: \"7e8ad9f8-a5c0-4992-b252-00ca50c053ae\") " pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" Nov 24 06:59:44.165484 kubelet[2708]: I1124 06:59:44.165432 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhvj\" (UniqueName: \"kubernetes.io/projected/688d478f-9397-4780-ac83-825ed42a52b7-kube-api-access-vhhvj\") pod \"calico-apiserver-79c5bccf9c-vgh4k\" (UID: 
\"688d478f-9397-4780-ac83-825ed42a52b7\") " pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" Nov 24 06:59:44.166395 kubelet[2708]: I1124 06:59:44.165659 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4md\" (UniqueName: \"kubernetes.io/projected/9a41cd22-6538-4edb-a2fc-fe53fb988efb-kube-api-access-nn4md\") pod \"calico-kube-controllers-589b8dfb96-w2xrv\" (UID: \"9a41cd22-6538-4edb-a2fc-fe53fb988efb\") " pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" Nov 24 06:59:44.166395 kubelet[2708]: I1124 06:59:44.165699 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wmph\" (UniqueName: \"kubernetes.io/projected/9436af66-5c36-4d0f-a831-5035675bad6c-kube-api-access-4wmph\") pod \"whisker-5767f48f6c-fl897\" (UID: \"9436af66-5c36-4d0f-a831-5035675bad6c\") " pod="calico-system/whisker-5767f48f6c-fl897" Nov 24 06:59:44.166395 kubelet[2708]: I1124 06:59:44.165742 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65e83e8e-3217-4c5d-9dc8-a1e6cab084a7-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xx5mj\" (UID: \"65e83e8e-3217-4c5d-9dc8-a1e6cab084a7\") " pod="calico-system/goldmane-7c778bb748-xx5mj" Nov 24 06:59:44.166395 kubelet[2708]: I1124 06:59:44.165779 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp2hs\" (UniqueName: \"kubernetes.io/projected/7e8ad9f8-a5c0-4992-b252-00ca50c053ae-kube-api-access-kp2hs\") pod \"calico-apiserver-79c5bccf9c-g8247\" (UID: \"7e8ad9f8-a5c0-4992-b252-00ca50c053ae\") " pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" Nov 24 06:59:44.166395 kubelet[2708]: I1124 06:59:44.165819 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chbbk\" 
(UniqueName: \"kubernetes.io/projected/70e145da-424d-4d20-b7bf-e0cf67bf5a55-kube-api-access-chbbk\") pod \"coredns-66bc5c9577-kv44w\" (UID: \"70e145da-424d-4d20-b7bf-e0cf67bf5a55\") " pod="kube-system/coredns-66bc5c9577-kv44w" Nov 24 06:59:44.166646 kubelet[2708]: I1124 06:59:44.165848 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-ca-bundle\") pod \"whisker-5767f48f6c-fl897\" (UID: \"9436af66-5c36-4d0f-a831-5035675bad6c\") " pod="calico-system/whisker-5767f48f6c-fl897" Nov 24 06:59:44.166646 kubelet[2708]: I1124 06:59:44.165863 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65e83e8e-3217-4c5d-9dc8-a1e6cab084a7-config\") pod \"goldmane-7c778bb748-xx5mj\" (UID: \"65e83e8e-3217-4c5d-9dc8-a1e6cab084a7\") " pod="calico-system/goldmane-7c778bb748-xx5mj" Nov 24 06:59:44.166646 kubelet[2708]: I1124 06:59:44.165879 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bppch\" (UniqueName: \"kubernetes.io/projected/65e83e8e-3217-4c5d-9dc8-a1e6cab084a7-kube-api-access-bppch\") pod \"goldmane-7c778bb748-xx5mj\" (UID: \"65e83e8e-3217-4c5d-9dc8-a1e6cab084a7\") " pod="calico-system/goldmane-7c778bb748-xx5mj" Nov 24 06:59:44.166646 kubelet[2708]: I1124 06:59:44.165926 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvslf\" (UniqueName: \"kubernetes.io/projected/e6c5825d-6ef5-4e3c-a5de-981230cfd835-kube-api-access-tvslf\") pod \"calico-apiserver-c87b844f6-pjvfq\" (UID: \"e6c5825d-6ef5-4e3c-a5de-981230cfd835\") " pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" Nov 24 06:59:44.181859 containerd[1532]: time="2025-11-24T06:59:44.181804330Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 06:59:44.354116 kubelet[2708]: E1124 06:59:44.354054 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:44.356229 containerd[1532]: time="2025-11-24T06:59:44.356191185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v4mwq,Uid:f6d6def7-1dbb-4073-936b-e22bda94a97c,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:44.375044 kubelet[2708]: E1124 06:59:44.371631 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:44.375198 containerd[1532]: time="2025-11-24T06:59:44.374403591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kv44w,Uid:70e145da-424d-4d20-b7bf-e0cf67bf5a55,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:44.402232 containerd[1532]: time="2025-11-24T06:59:44.401982081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-g8247,Uid:7e8ad9f8-a5c0-4992-b252-00ca50c053ae,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:59:44.407074 containerd[1532]: time="2025-11-24T06:59:44.407018992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c87b844f6-pjvfq,Uid:e6c5825d-6ef5-4e3c-a5de-981230cfd835,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:59:44.453124 containerd[1532]: time="2025-11-24T06:59:44.453077755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-vgh4k,Uid:688d478f-9397-4780-ac83-825ed42a52b7,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:59:44.459214 containerd[1532]: time="2025-11-24T06:59:44.459168855Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-589b8dfb96-w2xrv,Uid:9a41cd22-6538-4edb-a2fc-fe53fb988efb,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:44.459665 containerd[1532]: time="2025-11-24T06:59:44.459615927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5767f48f6c-fl897,Uid:9436af66-5c36-4d0f-a831-5035675bad6c,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:44.461031 containerd[1532]: time="2025-11-24T06:59:44.460963468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xx5mj,Uid:65e83e8e-3217-4c5d-9dc8-a1e6cab084a7,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:44.758502 containerd[1532]: time="2025-11-24T06:59:44.757506962Z" level=error msg="Failed to destroy network for sandbox \"dba1427de24492ed371ba805b8dfefdbddab3161d4a6cc1b44c04e2ae1463c9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.758699 containerd[1532]: time="2025-11-24T06:59:44.758594408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xx5mj,Uid:65e83e8e-3217-4c5d-9dc8-a1e6cab084a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba1427de24492ed371ba805b8dfefdbddab3161d4a6cc1b44c04e2ae1463c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.760685 containerd[1532]: time="2025-11-24T06:59:44.760270665Z" level=error msg="Failed to destroy network for sandbox \"8dfb3fa9954c7c3e6448dcb93bc0accff416b6934cf309b6c0a7e580fccbe8e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 
06:59:44.760819 kubelet[2708]: E1124 06:59:44.760327 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba1427de24492ed371ba805b8dfefdbddab3161d4a6cc1b44c04e2ae1463c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.760819 kubelet[2708]: E1124 06:59:44.760533 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba1427de24492ed371ba805b8dfefdbddab3161d4a6cc1b44c04e2ae1463c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xx5mj" Nov 24 06:59:44.761526 kubelet[2708]: E1124 06:59:44.761084 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba1427de24492ed371ba805b8dfefdbddab3161d4a6cc1b44c04e2ae1463c9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xx5mj" Nov 24 06:59:44.761997 kubelet[2708]: E1124 06:59:44.761474 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xx5mj_calico-system(65e83e8e-3217-4c5d-9dc8-a1e6cab084a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xx5mj_calico-system(65e83e8e-3217-4c5d-9dc8-a1e6cab084a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dba1427de24492ed371ba805b8dfefdbddab3161d4a6cc1b44c04e2ae1463c9f\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7" Nov 24 06:59:44.775040 containerd[1532]: time="2025-11-24T06:59:44.774451843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-vgh4k,Uid:688d478f-9397-4780-ac83-825ed42a52b7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dfb3fa9954c7c3e6448dcb93bc0accff416b6934cf309b6c0a7e580fccbe8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.776285 kubelet[2708]: E1124 06:59:44.775919 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dfb3fa9954c7c3e6448dcb93bc0accff416b6934cf309b6c0a7e580fccbe8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.776285 kubelet[2708]: E1124 06:59:44.775998 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dfb3fa9954c7c3e6448dcb93bc0accff416b6934cf309b6c0a7e580fccbe8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" Nov 24 06:59:44.776285 kubelet[2708]: E1124 06:59:44.776022 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8dfb3fa9954c7c3e6448dcb93bc0accff416b6934cf309b6c0a7e580fccbe8e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" Nov 24 06:59:44.776686 kubelet[2708]: E1124 06:59:44.776120 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79c5bccf9c-vgh4k_calico-apiserver(688d478f-9397-4780-ac83-825ed42a52b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c5bccf9c-vgh4k_calico-apiserver(688d478f-9397-4780-ac83-825ed42a52b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dfb3fa9954c7c3e6448dcb93bc0accff416b6934cf309b6c0a7e580fccbe8e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7" Nov 24 06:59:44.824907 containerd[1532]: time="2025-11-24T06:59:44.824761436Z" level=error msg="Failed to destroy network for sandbox \"3c64f275a2b9c83a53471c7b372c622ba2468c21dff31baa04f74f5a59685140\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.826551 containerd[1532]: time="2025-11-24T06:59:44.826240528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5767f48f6c-fl897,Uid:9436af66-5c36-4d0f-a831-5035675bad6c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c64f275a2b9c83a53471c7b372c622ba2468c21dff31baa04f74f5a59685140\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.828511 kubelet[2708]: E1124 06:59:44.827433 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c64f275a2b9c83a53471c7b372c622ba2468c21dff31baa04f74f5a59685140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.828511 kubelet[2708]: E1124 06:59:44.828046 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c64f275a2b9c83a53471c7b372c622ba2468c21dff31baa04f74f5a59685140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5767f48f6c-fl897" Nov 24 06:59:44.828511 kubelet[2708]: E1124 06:59:44.828092 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c64f275a2b9c83a53471c7b372c622ba2468c21dff31baa04f74f5a59685140\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5767f48f6c-fl897" Nov 24 06:59:44.830000 kubelet[2708]: E1124 06:59:44.828219 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5767f48f6c-fl897_calico-system(9436af66-5c36-4d0f-a831-5035675bad6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5767f48f6c-fl897_calico-system(9436af66-5c36-4d0f-a831-5035675bad6c)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"3c64f275a2b9c83a53471c7b372c622ba2468c21dff31baa04f74f5a59685140\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5767f48f6c-fl897" podUID="9436af66-5c36-4d0f-a831-5035675bad6c" Nov 24 06:59:44.871854 containerd[1532]: time="2025-11-24T06:59:44.870969762Z" level=error msg="Failed to destroy network for sandbox \"4a6956343fc48deecd1a9662916226b3976a03e2fa902209be0383caa85574d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.872418 containerd[1532]: time="2025-11-24T06:59:44.872174630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v4mwq,Uid:f6d6def7-1dbb-4073-936b-e22bda94a97c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a6956343fc48deecd1a9662916226b3976a03e2fa902209be0383caa85574d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.873382 kubelet[2708]: E1124 06:59:44.872950 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a6956343fc48deecd1a9662916226b3976a03e2fa902209be0383caa85574d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.873382 kubelet[2708]: E1124 06:59:44.873006 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4a6956343fc48deecd1a9662916226b3976a03e2fa902209be0383caa85574d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v4mwq" Nov 24 06:59:44.873382 kubelet[2708]: E1124 06:59:44.873026 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a6956343fc48deecd1a9662916226b3976a03e2fa902209be0383caa85574d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-v4mwq" Nov 24 06:59:44.876285 kubelet[2708]: E1124 06:59:44.875971 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-v4mwq_kube-system(f6d6def7-1dbb-4073-936b-e22bda94a97c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-v4mwq_kube-system(f6d6def7-1dbb-4073-936b-e22bda94a97c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a6956343fc48deecd1a9662916226b3976a03e2fa902209be0383caa85574d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-v4mwq" podUID="f6d6def7-1dbb-4073-936b-e22bda94a97c" Nov 24 06:59:44.879086 containerd[1532]: time="2025-11-24T06:59:44.878920015Z" level=error msg="Failed to destroy network for sandbox \"9e1ab97429d2353024048039c759456804417d91a7d92f44d25cee7ffb12adff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.884082 
containerd[1532]: time="2025-11-24T06:59:44.883697727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kv44w,Uid:70e145da-424d-4d20-b7bf-e0cf67bf5a55,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1ab97429d2353024048039c759456804417d91a7d92f44d25cee7ffb12adff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.884794 kubelet[2708]: E1124 06:59:44.884479 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1ab97429d2353024048039c759456804417d91a7d92f44d25cee7ffb12adff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.884794 kubelet[2708]: E1124 06:59:44.884570 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1ab97429d2353024048039c759456804417d91a7d92f44d25cee7ffb12adff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kv44w" Nov 24 06:59:44.884794 kubelet[2708]: E1124 06:59:44.884599 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e1ab97429d2353024048039c759456804417d91a7d92f44d25cee7ffb12adff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kv44w" Nov 24 
06:59:44.885909 kubelet[2708]: E1124 06:59:44.884671 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kv44w_kube-system(70e145da-424d-4d20-b7bf-e0cf67bf5a55)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kv44w_kube-system(70e145da-424d-4d20-b7bf-e0cf67bf5a55)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e1ab97429d2353024048039c759456804417d91a7d92f44d25cee7ffb12adff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kv44w" podUID="70e145da-424d-4d20-b7bf-e0cf67bf5a55" Nov 24 06:59:44.890006 containerd[1532]: time="2025-11-24T06:59:44.889876656Z" level=error msg="Failed to destroy network for sandbox \"c6ec8919e3a0af782f71ead4c2dbd855390cd21fd48425eee1a27d2f9cc265f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.891387 containerd[1532]: time="2025-11-24T06:59:44.891323707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c87b844f6-pjvfq,Uid:e6c5825d-6ef5-4e3c-a5de-981230cfd835,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6ec8919e3a0af782f71ead4c2dbd855390cd21fd48425eee1a27d2f9cc265f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.893036 kubelet[2708]: E1124 06:59:44.892962 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c6ec8919e3a0af782f71ead4c2dbd855390cd21fd48425eee1a27d2f9cc265f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.893184 kubelet[2708]: E1124 06:59:44.893052 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6ec8919e3a0af782f71ead4c2dbd855390cd21fd48425eee1a27d2f9cc265f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" Nov 24 06:59:44.893184 kubelet[2708]: E1124 06:59:44.893082 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6ec8919e3a0af782f71ead4c2dbd855390cd21fd48425eee1a27d2f9cc265f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" Nov 24 06:59:44.893309 kubelet[2708]: E1124 06:59:44.893169 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c87b844f6-pjvfq_calico-apiserver(e6c5825d-6ef5-4e3c-a5de-981230cfd835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c87b844f6-pjvfq_calico-apiserver(e6c5825d-6ef5-4e3c-a5de-981230cfd835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6ec8919e3a0af782f71ead4c2dbd855390cd21fd48425eee1a27d2f9cc265f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835" Nov 24 06:59:44.928226 containerd[1532]: time="2025-11-24T06:59:44.927596078Z" level=error msg="Failed to destroy network for sandbox \"4d6e808d0d9f2d0036a3c819f72ae9d8f44d28461c691f8a6ff3a0d0e511e4ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.936864 containerd[1532]: time="2025-11-24T06:59:44.936227252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-g8247,Uid:7e8ad9f8-a5c0-4992-b252-00ca50c053ae,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6e808d0d9f2d0036a3c819f72ae9d8f44d28461c691f8a6ff3a0d0e511e4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.939851 kubelet[2708]: E1124 06:59:44.939537 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6e808d0d9f2d0036a3c819f72ae9d8f44d28461c691f8a6ff3a0d0e511e4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.941248 kubelet[2708]: E1124 06:59:44.940806 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6e808d0d9f2d0036a3c819f72ae9d8f44d28461c691f8a6ff3a0d0e511e4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" Nov 24 06:59:44.941248 kubelet[2708]: E1124 06:59:44.940859 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d6e808d0d9f2d0036a3c819f72ae9d8f44d28461c691f8a6ff3a0d0e511e4ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" Nov 24 06:59:44.942255 kubelet[2708]: E1124 06:59:44.942213 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79c5bccf9c-g8247_calico-apiserver(7e8ad9f8-a5c0-4992-b252-00ca50c053ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79c5bccf9c-g8247_calico-apiserver(7e8ad9f8-a5c0-4992-b252-00ca50c053ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d6e808d0d9f2d0036a3c819f72ae9d8f44d28461c691f8a6ff3a0d0e511e4ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae" Nov 24 06:59:44.944070 systemd[1]: Created slice kubepods-besteffort-pod31736d8f_1244_4ceb_aaba_f284117475ca.slice - libcontainer container kubepods-besteffort-pod31736d8f_1244_4ceb_aaba_f284117475ca.slice. 
Nov 24 06:59:44.964337 containerd[1532]: time="2025-11-24T06:59:44.963906584Z" level=error msg="Failed to destroy network for sandbox \"f8e414a15ac87e2d0baef44bf8a981ed49b30862b022385b18d327fc31040e73\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.966000 containerd[1532]: time="2025-11-24T06:59:44.965936675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jmszh,Uid:31736d8f-1244-4ceb-aaba-f284117475ca,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:44.967422 containerd[1532]: time="2025-11-24T06:59:44.967259988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589b8dfb96-w2xrv,Uid:9a41cd22-6538-4edb-a2fc-fe53fb988efb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8e414a15ac87e2d0baef44bf8a981ed49b30862b022385b18d327fc31040e73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.968468 kubelet[2708]: E1124 06:59:44.968416 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8e414a15ac87e2d0baef44bf8a981ed49b30862b022385b18d327fc31040e73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:44.968614 kubelet[2708]: E1124 06:59:44.968490 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8e414a15ac87e2d0baef44bf8a981ed49b30862b022385b18d327fc31040e73\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" Nov 24 06:59:44.968614 kubelet[2708]: E1124 06:59:44.968517 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8e414a15ac87e2d0baef44bf8a981ed49b30862b022385b18d327fc31040e73\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" Nov 24 06:59:44.968678 kubelet[2708]: E1124 06:59:44.968598 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589b8dfb96-w2xrv_calico-system(9a41cd22-6538-4edb-a2fc-fe53fb988efb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589b8dfb96-w2xrv_calico-system(9a41cd22-6538-4edb-a2fc-fe53fb988efb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8e414a15ac87e2d0baef44bf8a981ed49b30862b022385b18d327fc31040e73\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb" Nov 24 06:59:45.066335 containerd[1532]: time="2025-11-24T06:59:45.066073211Z" level=error msg="Failed to destroy network for sandbox \"5810e043dda1118191e194b33f27ae8657ab63ce562828be2f334f869afe894a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:45.068167 containerd[1532]: 
time="2025-11-24T06:59:45.067869919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jmszh,Uid:31736d8f-1244-4ceb-aaba-f284117475ca,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5810e043dda1118191e194b33f27ae8657ab63ce562828be2f334f869afe894a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:45.068490 kubelet[2708]: E1124 06:59:45.068286 2708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5810e043dda1118191e194b33f27ae8657ab63ce562828be2f334f869afe894a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:59:45.068490 kubelet[2708]: E1124 06:59:45.068363 2708 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5810e043dda1118191e194b33f27ae8657ab63ce562828be2f334f869afe894a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jmszh" Nov 24 06:59:45.068490 kubelet[2708]: E1124 06:59:45.068389 2708 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5810e043dda1118191e194b33f27ae8657ab63ce562828be2f334f869afe894a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jmszh" Nov 24 06:59:45.069194 kubelet[2708]: 
E1124 06:59:45.068468 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5810e043dda1118191e194b33f27ae8657ab63ce562828be2f334f869afe894a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 06:59:48.600096 kubelet[2708]: I1124 06:59:48.600043 2708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 06:59:48.602154 kubelet[2708]: E1124 06:59:48.600596 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:49.161345 kubelet[2708]: E1124 06:59:49.161300 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:51.599611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1552211922.mount: Deactivated successfully. 
Nov 24 06:59:51.783739 containerd[1532]: time="2025-11-24T06:59:51.708982076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 06:59:51.848738 containerd[1532]: time="2025-11-24T06:59:51.848451240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:51.866170 containerd[1532]: time="2025-11-24T06:59:51.866110289Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:51.881658 containerd[1532]: time="2025-11-24T06:59:51.881346261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:59:51.889010 containerd[1532]: time="2025-11-24T06:59:51.887890912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.70037889s" Nov 24 06:59:51.889010 containerd[1532]: time="2025-11-24T06:59:51.887977363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 06:59:51.958189 containerd[1532]: time="2025-11-24T06:59:51.958128773Z" level=info msg="CreateContainer within sandbox \"0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 06:59:52.000029 containerd[1532]: time="2025-11-24T06:59:51.999968325Z" level=info msg="Container 
97127177688504a05678ef051583de8dfab3f64ccd8b1d608aee03c26c409a36: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:52.000658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2278947076.mount: Deactivated successfully. Nov 24 06:59:52.027603 containerd[1532]: time="2025-11-24T06:59:52.027534306Z" level=info msg="CreateContainer within sandbox \"0d9d8726726ab4848598386a1487d3dfdaf0678425f0718ca374cee26d29cecc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97127177688504a05678ef051583de8dfab3f64ccd8b1d608aee03c26c409a36\"" Nov 24 06:59:52.029900 containerd[1532]: time="2025-11-24T06:59:52.028691762Z" level=info msg="StartContainer for \"97127177688504a05678ef051583de8dfab3f64ccd8b1d608aee03c26c409a36\"" Nov 24 06:59:52.048167 containerd[1532]: time="2025-11-24T06:59:52.048069438Z" level=info msg="connecting to shim 97127177688504a05678ef051583de8dfab3f64ccd8b1d608aee03c26c409a36" address="unix:///run/containerd/s/3fa1fcade70ac385037e084f68a8276dcc50053b35c52c9f07bc375f148774b3" protocol=ttrpc version=3 Nov 24 06:59:52.215169 systemd[1]: Started cri-containerd-97127177688504a05678ef051583de8dfab3f64ccd8b1d608aee03c26c409a36.scope - libcontainer container 97127177688504a05678ef051583de8dfab3f64ccd8b1d608aee03c26c409a36. Nov 24 06:59:52.365525 containerd[1532]: time="2025-11-24T06:59:52.365486705Z" level=info msg="StartContainer for \"97127177688504a05678ef051583de8dfab3f64ccd8b1d608aee03c26c409a36\" returns successfully" Nov 24 06:59:52.494369 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 06:59:52.495636 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 24 06:59:52.836654 kubelet[2708]: I1124 06:59:52.836252 2708 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-ca-bundle\") pod \"9436af66-5c36-4d0f-a831-5035675bad6c\" (UID: \"9436af66-5c36-4d0f-a831-5035675bad6c\") " Nov 24 06:59:52.836654 kubelet[2708]: I1124 06:59:52.836355 2708 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-backend-key-pair\") pod \"9436af66-5c36-4d0f-a831-5035675bad6c\" (UID: \"9436af66-5c36-4d0f-a831-5035675bad6c\") " Nov 24 06:59:52.836654 kubelet[2708]: I1124 06:59:52.836378 2708 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wmph\" (UniqueName: \"kubernetes.io/projected/9436af66-5c36-4d0f-a831-5035675bad6c-kube-api-access-4wmph\") pod \"9436af66-5c36-4d0f-a831-5035675bad6c\" (UID: \"9436af66-5c36-4d0f-a831-5035675bad6c\") " Nov 24 06:59:52.838329 kubelet[2708]: I1124 06:59:52.837583 2708 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9436af66-5c36-4d0f-a831-5035675bad6c" (UID: "9436af66-5c36-4d0f-a831-5035675bad6c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 06:59:52.851635 kubelet[2708]: I1124 06:59:52.850146 2708 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9436af66-5c36-4d0f-a831-5035675bad6c-kube-api-access-4wmph" (OuterVolumeSpecName: "kube-api-access-4wmph") pod "9436af66-5c36-4d0f-a831-5035675bad6c" (UID: "9436af66-5c36-4d0f-a831-5035675bad6c"). InnerVolumeSpecName "kube-api-access-4wmph". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 06:59:52.851635 kubelet[2708]: I1124 06:59:52.850563 2708 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9436af66-5c36-4d0f-a831-5035675bad6c" (UID: "9436af66-5c36-4d0f-a831-5035675bad6c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 06:59:52.850860 systemd[1]: var-lib-kubelet-pods-9436af66\x2d5c36\x2d4d0f\x2da831\x2d5035675bad6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4wmph.mount: Deactivated successfully. Nov 24 06:59:52.851000 systemd[1]: var-lib-kubelet-pods-9436af66\x2d5c36\x2d4d0f\x2da831\x2d5035675bad6c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 06:59:52.937283 kubelet[2708]: I1124 06:59:52.937188 2708 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-backend-key-pair\") on node \"ci-4459.2.1-c-f92aac29d7\" DevicePath \"\"" Nov 24 06:59:52.937283 kubelet[2708]: I1124 06:59:52.937240 2708 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4wmph\" (UniqueName: \"kubernetes.io/projected/9436af66-5c36-4d0f-a831-5035675bad6c-kube-api-access-4wmph\") on node \"ci-4459.2.1-c-f92aac29d7\" DevicePath \"\"" Nov 24 06:59:52.937283 kubelet[2708]: I1124 06:59:52.937253 2708 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9436af66-5c36-4d0f-a831-5035675bad6c-whisker-ca-bundle\") on node \"ci-4459.2.1-c-f92aac29d7\" DevicePath \"\"" Nov 24 06:59:53.217600 kubelet[2708]: E1124 06:59:53.217275 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:53.228776 systemd[1]: Removed slice kubepods-besteffort-pod9436af66_5c36_4d0f_a831_5035675bad6c.slice - libcontainer container kubepods-besteffort-pod9436af66_5c36_4d0f_a831_5035675bad6c.slice. Nov 24 06:59:53.281673 kubelet[2708]: I1124 06:59:53.281550 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wctnl" podStartSLOduration=2.6594579229999997 podStartE2EDuration="21.280701635s" podCreationTimestamp="2025-11-24 06:59:32 +0000 UTC" firstStartedPulling="2025-11-24 06:59:33.286382026 +0000 UTC m=+23.565329282" lastFinishedPulling="2025-11-24 06:59:51.907625731 +0000 UTC m=+42.186572994" observedRunningTime="2025-11-24 06:59:53.279263981 +0000 UTC m=+43.558211249" watchObservedRunningTime="2025-11-24 06:59:53.280701635 +0000 UTC m=+43.559648901" Nov 24 06:59:53.422951 systemd[1]: Created slice kubepods-besteffort-pod43c71191_e8bd_4deb_bacd_b63f93810870.slice - libcontainer container kubepods-besteffort-pod43c71191_e8bd_4deb_bacd_b63f93810870.slice. 
Nov 24 06:59:53.542997 kubelet[2708]: I1124 06:59:53.542852 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/43c71191-e8bd-4deb-bacd-b63f93810870-whisker-backend-key-pair\") pod \"whisker-6f49b47ccf-8dgfn\" (UID: \"43c71191-e8bd-4deb-bacd-b63f93810870\") " pod="calico-system/whisker-6f49b47ccf-8dgfn" Nov 24 06:59:53.543392 kubelet[2708]: I1124 06:59:53.543241 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43c71191-e8bd-4deb-bacd-b63f93810870-whisker-ca-bundle\") pod \"whisker-6f49b47ccf-8dgfn\" (UID: \"43c71191-e8bd-4deb-bacd-b63f93810870\") " pod="calico-system/whisker-6f49b47ccf-8dgfn" Nov 24 06:59:53.543607 kubelet[2708]: I1124 06:59:53.543537 2708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpfp\" (UniqueName: \"kubernetes.io/projected/43c71191-e8bd-4deb-bacd-b63f93810870-kube-api-access-wrpfp\") pod \"whisker-6f49b47ccf-8dgfn\" (UID: \"43c71191-e8bd-4deb-bacd-b63f93810870\") " pod="calico-system/whisker-6f49b47ccf-8dgfn" Nov 24 06:59:53.731026 containerd[1532]: time="2025-11-24T06:59:53.730967133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f49b47ccf-8dgfn,Uid:43c71191-e8bd-4deb-bacd-b63f93810870,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:53.924016 kubelet[2708]: I1124 06:59:53.923952 2708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9436af66-5c36-4d0f-a831-5035675bad6c" path="/var/lib/kubelet/pods/9436af66-5c36-4d0f-a831-5035675bad6c/volumes" Nov 24 06:59:54.120823 systemd-networkd[1416]: cali4ea835e614f: Link UP Nov 24 06:59:54.121117 systemd-networkd[1416]: cali4ea835e614f: Gained carrier Nov 24 06:59:54.173050 containerd[1532]: 2025-11-24 06:59:53.784 [INFO][3831] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Nov 24 06:59:54.173050 containerd[1532]: 2025-11-24 06:59:53.814 [INFO][3831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0 whisker-6f49b47ccf- calico-system 43c71191-e8bd-4deb-bacd-b63f93810870 937 0 2025-11-24 06:59:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f49b47ccf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 whisker-6f49b47ccf-8dgfn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4ea835e614f [] [] }} ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-" Nov 24 06:59:54.173050 containerd[1532]: 2025-11-24 06:59:53.815 [INFO][3831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" Nov 24 06:59:54.173050 containerd[1532]: 2025-11-24 06:59:54.021 [INFO][3842] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" HandleID="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Workload="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.024 [INFO][3842] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" 
HandleID="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Workload="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001028f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"whisker-6f49b47ccf-8dgfn", "timestamp":"2025-11-24 06:59:54.021886658 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.024 [INFO][3842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.024 [INFO][3842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.025 [INFO][3842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.041 [INFO][3842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.054 [INFO][3842] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.065 [INFO][3842] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.068 [INFO][3842] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173402 containerd[1532]: 2025-11-24 06:59:54.072 [INFO][3842] ipam/ipam.go 235: Affinity is confirmed and block 
has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173662 containerd[1532]: 2025-11-24 06:59:54.072 [INFO][3842] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173662 containerd[1532]: 2025-11-24 06:59:54.076 [INFO][3842] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6 Nov 24 06:59:54.173662 containerd[1532]: 2025-11-24 06:59:54.084 [INFO][3842] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173662 containerd[1532]: 2025-11-24 06:59:54.095 [INFO][3842] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.193/26] block=192.168.43.192/26 handle="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173662 containerd[1532]: 2025-11-24 06:59:54.095 [INFO][3842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.193/26] handle="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:54.173662 containerd[1532]: 2025-11-24 06:59:54.095 [INFO][3842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:59:54.173662 containerd[1532]: 2025-11-24 06:59:54.095 [INFO][3842] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.193/26] IPv6=[] ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" HandleID="k8s-pod-network.a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Workload="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" Nov 24 06:59:54.175027 containerd[1532]: 2025-11-24 06:59:54.100 [INFO][3831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0", GenerateName:"whisker-6f49b47ccf-", Namespace:"calico-system", SelfLink:"", UID:"43c71191-e8bd-4deb-bacd-b63f93810870", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f49b47ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"whisker-6f49b47ccf-8dgfn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali4ea835e614f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:54.175027 containerd[1532]: 2025-11-24 06:59:54.101 [INFO][3831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.193/32] ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" Nov 24 06:59:54.175159 containerd[1532]: 2025-11-24 06:59:54.101 [INFO][3831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ea835e614f ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" Nov 24 06:59:54.175159 containerd[1532]: 2025-11-24 06:59:54.120 [INFO][3831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" Nov 24 06:59:54.175222 containerd[1532]: 2025-11-24 06:59:54.127 [INFO][3831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0", GenerateName:"whisker-6f49b47ccf-", Namespace:"calico-system", SelfLink:"", 
UID:"43c71191-e8bd-4deb-bacd-b63f93810870", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f49b47ccf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6", Pod:"whisker-6f49b47ccf-8dgfn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.43.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ea835e614f", MAC:"3e:c7:e7:bc:ee:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:54.175848 containerd[1532]: 2025-11-24 06:59:54.165 [INFO][3831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" Namespace="calico-system" Pod="whisker-6f49b47ccf-8dgfn" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-whisker--6f49b47ccf--8dgfn-eth0" Nov 24 06:59:54.223257 kubelet[2708]: I1124 06:59:54.223211 2708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 06:59:54.224256 kubelet[2708]: E1124 06:59:54.224227 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:54.419137 
containerd[1532]: time="2025-11-24T06:59:54.419070404Z" level=info msg="connecting to shim a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6" address="unix:///run/containerd/s/d6252db8cd1ce213dbbbfd2aa14cb17db41d16f22980a825e642df452f110181" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:54.499425 systemd[1]: Started cri-containerd-a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6.scope - libcontainer container a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6. Nov 24 06:59:54.676947 containerd[1532]: time="2025-11-24T06:59:54.676898552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f49b47ccf-8dgfn,Uid:43c71191-e8bd-4deb-bacd-b63f93810870,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1740383a41f21f2bfbe994b9c5971ff3f90d2198e10056eff3dc9429cf743b6\"" Nov 24 06:59:54.704557 containerd[1532]: time="2025-11-24T06:59:54.704408149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 06:59:55.105912 containerd[1532]: time="2025-11-24T06:59:55.105838089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:59:55.106956 containerd[1532]: time="2025-11-24T06:59:55.106899666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 06:59:55.107244 containerd[1532]: time="2025-11-24T06:59:55.107156713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 06:59:55.113743 kubelet[2708]: E1124 06:59:55.113448 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:59:55.113743 kubelet[2708]: E1124 06:59:55.113527 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:59:55.113743 kubelet[2708]: E1124 06:59:55.113669 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6f49b47ccf-8dgfn_calico-system(43c71191-e8bd-4deb-bacd-b63f93810870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 06:59:55.115662 containerd[1532]: time="2025-11-24T06:59:55.115610745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 06:59:55.418883 systemd-networkd[1416]: vxlan.calico: Link UP Nov 24 06:59:55.418893 systemd-networkd[1416]: vxlan.calico: Gained carrier Nov 24 06:59:55.429564 containerd[1532]: time="2025-11-24T06:59:55.429258802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:59:55.435867 containerd[1532]: time="2025-11-24T06:59:55.433391211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 06:59:55.435867 containerd[1532]: time="2025-11-24T06:59:55.434815788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 06:59:55.436725 kubelet[2708]: E1124 06:59:55.435708 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:59:55.436725 kubelet[2708]: E1124 06:59:55.435791 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:59:55.436725 kubelet[2708]: E1124 06:59:55.435894 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6f49b47ccf-8dgfn_calico-system(43c71191-e8bd-4deb-bacd-b63f93810870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 06:59:55.438810 kubelet[2708]: E1124 06:59:55.435947 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870" Nov 24 06:59:56.037314 systemd-networkd[1416]: cali4ea835e614f: Gained IPv6LL Nov 24 06:59:56.232848 kubelet[2708]: E1124 06:59:56.232632 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870" Nov 24 06:59:57.188970 systemd-networkd[1416]: vxlan.calico: Gained IPv6LL Nov 24 06:59:57.923759 containerd[1532]: time="2025-11-24T06:59:57.923661418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xx5mj,Uid:65e83e8e-3217-4c5d-9dc8-a1e6cab084a7,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:57.925406 containerd[1532]: time="2025-11-24T06:59:57.925347471Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-vgh4k,Uid:688d478f-9397-4780-ac83-825ed42a52b7,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:59:57.927782 kubelet[2708]: E1124 06:59:57.926361 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:57.929336 containerd[1532]: time="2025-11-24T06:59:57.929063039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v4mwq,Uid:f6d6def7-1dbb-4073-936b-e22bda94a97c,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:58.197363 systemd-networkd[1416]: cali1c1663f16b8: Link UP Nov 24 06:59:58.200621 systemd-networkd[1416]: cali1c1663f16b8: Gained carrier Nov 24 06:59:58.235038 containerd[1532]: 2025-11-24 06:59:58.018 [INFO][4115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0 calico-apiserver-79c5bccf9c- calico-apiserver 688d478f-9397-4780-ac83-825ed42a52b7 860 0 2025-11-24 06:59:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c5bccf9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 calico-apiserver-79c5bccf9c-vgh4k eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1c1663f16b8 [] [] }} ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-" Nov 24 06:59:58.235038 containerd[1532]: 2025-11-24 06:59:58.018 [INFO][4115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" Nov 24 06:59:58.235038 containerd[1532]: 2025-11-24 06:59:58.093 [INFO][4131] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" HandleID="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.094 [INFO][4131] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" HandleID="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f990), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"calico-apiserver-79c5bccf9c-vgh4k", "timestamp":"2025-11-24 06:59:58.093321973 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.095 [INFO][4131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.095 [INFO][4131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.095 [INFO][4131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.115 [INFO][4131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.130 [INFO][4131] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.145 [INFO][4131] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.149 [INFO][4131] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.236418 containerd[1532]: 2025-11-24 06:59:58.152 [INFO][4131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.237161 containerd[1532]: 2025-11-24 06:59:58.153 [INFO][4131] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.237161 containerd[1532]: 2025-11-24 06:59:58.156 [INFO][4131] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4 Nov 24 06:59:58.237161 containerd[1532]: 2025-11-24 06:59:58.166 [INFO][4131] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.237161 containerd[1532]: 2025-11-24 06:59:58.181 [INFO][4131] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.43.194/26] block=192.168.43.192/26 handle="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.237161 containerd[1532]: 2025-11-24 06:59:58.181 [INFO][4131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.194/26] handle="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.237161 containerd[1532]: 2025-11-24 06:59:58.181 [INFO][4131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:59:58.237161 containerd[1532]: 2025-11-24 06:59:58.182 [INFO][4131] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.194/26] IPv6=[] ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" HandleID="k8s-pod-network.0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" Nov 24 06:59:58.238641 containerd[1532]: 2025-11-24 06:59:58.189 [INFO][4115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0", GenerateName:"calico-apiserver-79c5bccf9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"688d478f-9397-4780-ac83-825ed42a52b7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5bccf9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"calico-apiserver-79c5bccf9c-vgh4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1c1663f16b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:58.239741 containerd[1532]: 2025-11-24 06:59:58.189 [INFO][4115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.194/32] ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" Nov 24 06:59:58.239741 containerd[1532]: 2025-11-24 06:59:58.190 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c1663f16b8 ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" Nov 24 06:59:58.239741 containerd[1532]: 2025-11-24 06:59:58.197 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" 
Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" Nov 24 06:59:58.239943 containerd[1532]: 2025-11-24 06:59:58.198 [INFO][4115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0", GenerateName:"calico-apiserver-79c5bccf9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"688d478f-9397-4780-ac83-825ed42a52b7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5bccf9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4", Pod:"calico-apiserver-79c5bccf9c-vgh4k", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali1c1663f16b8", MAC:"32:0b:c5:bd:cf:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:58.240090 containerd[1532]: 2025-11-24 06:59:58.225 [INFO][4115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-vgh4k" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--vgh4k-eth0" Nov 24 06:59:58.312096 containerd[1532]: time="2025-11-24T06:59:58.309695357Z" level=info msg="connecting to shim 0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4" address="unix:///run/containerd/s/9bcc769ad6d1c34023fe694d0007656e263cd382ff9c5ea0cbc89a58ede57d33" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:58.364925 systemd-networkd[1416]: cali584e2de9904: Link UP Nov 24 06:59:58.367246 systemd-networkd[1416]: cali584e2de9904: Gained carrier Nov 24 06:59:58.433013 containerd[1532]: 2025-11-24 06:59:58.037 [INFO][4097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0 goldmane-7c778bb748- calico-system 65e83e8e-3217-4c5d-9dc8-a1e6cab084a7 856 0 2025-11-24 06:59:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 goldmane-7c778bb748-xx5mj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali584e2de9904 [] [] }} ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-" Nov 24 06:59:58.433013 
containerd[1532]: 2025-11-24 06:59:58.038 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" Nov 24 06:59:58.433013 containerd[1532]: 2025-11-24 06:59:58.144 [INFO][4137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" HandleID="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Workload="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.145 [INFO][4137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" HandleID="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Workload="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001032e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"goldmane-7c778bb748-xx5mj", "timestamp":"2025-11-24 06:59:58.144705793 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.145 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.182 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.182 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.217 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.239 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.258 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.263 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433278 containerd[1532]: 2025-11-24 06:59:58.273 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433511 containerd[1532]: 2025-11-24 06:59:58.277 [INFO][4137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433511 containerd[1532]: 2025-11-24 06:59:58.282 [INFO][4137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204 Nov 24 06:59:58.433511 containerd[1532]: 2025-11-24 06:59:58.293 [INFO][4137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433511 containerd[1532]: 2025-11-24 06:59:58.317 [INFO][4137] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.43.195/26] block=192.168.43.192/26 handle="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433511 containerd[1532]: 2025-11-24 06:59:58.318 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.195/26] handle="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.433511 containerd[1532]: 2025-11-24 06:59:58.321 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:59:58.433511 containerd[1532]: 2025-11-24 06:59:58.324 [INFO][4137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.195/26] IPv6=[] ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" HandleID="k8s-pod-network.65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Workload="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" Nov 24 06:59:58.434949 containerd[1532]: 2025-11-24 06:59:58.345 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"65e83e8e-3217-4c5d-9dc8-a1e6cab084a7", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"goldmane-7c778bb748-xx5mj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali584e2de9904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:58.435085 containerd[1532]: 2025-11-24 06:59:58.348 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.195/32] ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" Nov 24 06:59:58.435085 containerd[1532]: 2025-11-24 06:59:58.349 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali584e2de9904 ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" Nov 24 06:59:58.435085 containerd[1532]: 2025-11-24 06:59:58.366 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" Nov 24 06:59:58.435166 containerd[1532]: 2025-11-24 06:59:58.369 [INFO][4097] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"65e83e8e-3217-4c5d-9dc8-a1e6cab084a7", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204", Pod:"goldmane-7c778bb748-xx5mj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.43.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali584e2de9904", MAC:"f6:3c:c5:49:d0:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:58.435226 containerd[1532]: 2025-11-24 06:59:58.414 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" Namespace="calico-system" Pod="goldmane-7c778bb748-xx5mj" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-goldmane--7c778bb748--xx5mj-eth0" Nov 24 06:59:58.436294 systemd[1]: Started cri-containerd-0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4.scope - libcontainer container 0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4. Nov 24 06:59:58.483149 containerd[1532]: time="2025-11-24T06:59:58.482895187Z" level=info msg="connecting to shim 65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204" address="unix:///run/containerd/s/bc58495aefcdfe598d00ce73efe3ae1f86b73c150d969fc48183baab9bc77d19" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:58.526846 systemd-networkd[1416]: calid47c08facb0: Link UP Nov 24 06:59:58.529515 systemd-networkd[1416]: calid47c08facb0: Gained carrier Nov 24 06:59:58.577813 containerd[1532]: 2025-11-24 06:59:58.059 [INFO][4106] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0 coredns-66bc5c9577- kube-system f6d6def7-1dbb-4073-936b-e22bda94a97c 845 0 2025-11-24 06:59:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 coredns-66bc5c9577-v4mwq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid47c08facb0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-" Nov 24 06:59:58.577813 containerd[1532]: 2025-11-24 06:59:58.059 [INFO][4106] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" Nov 24 06:59:58.577813 containerd[1532]: 2025-11-24 06:59:58.148 [INFO][4142] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" HandleID="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Workload="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.148 [INFO][4142] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" HandleID="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Workload="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cc130), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"coredns-66bc5c9577-v4mwq", "timestamp":"2025-11-24 06:59:58.148109567 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.148 [INFO][4142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.320 [INFO][4142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.320 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.344 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.384 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.417 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.445 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.578117 containerd[1532]: 2025-11-24 06:59:58.452 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.579422 containerd[1532]: 2025-11-24 06:59:58.453 [INFO][4142] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.579422 containerd[1532]: 2025-11-24 06:59:58.461 [INFO][4142] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd Nov 24 06:59:58.579422 containerd[1532]: 2025-11-24 06:59:58.485 [INFO][4142] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.579422 containerd[1532]: 2025-11-24 06:59:58.509 [INFO][4142] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.43.196/26] block=192.168.43.192/26 handle="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.579422 containerd[1532]: 2025-11-24 06:59:58.511 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.196/26] handle="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:58.579422 containerd[1532]: 2025-11-24 06:59:58.511 [INFO][4142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:59:58.579422 containerd[1532]: 2025-11-24 06:59:58.512 [INFO][4142] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.196/26] IPv6=[] ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" HandleID="k8s-pod-network.0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Workload="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" Nov 24 06:59:58.579953 containerd[1532]: 2025-11-24 06:59:58.517 [INFO][4106] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6d6def7-1dbb-4073-936b-e22bda94a97c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"coredns-66bc5c9577-v4mwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid47c08facb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:58.579953 containerd[1532]: 2025-11-24 06:59:58.517 [INFO][4106] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.196/32] ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" Nov 24 06:59:58.579953 containerd[1532]: 2025-11-24 06:59:58.517 [INFO][4106] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid47c08facb0 
ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" Nov 24 06:59:58.579953 containerd[1532]: 2025-11-24 06:59:58.532 [INFO][4106] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" Nov 24 06:59:58.579953 containerd[1532]: 2025-11-24 06:59:58.534 [INFO][4106] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6d6def7-1dbb-4073-936b-e22bda94a97c", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd", 
Pod:"coredns-66bc5c9577-v4mwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid47c08facb0", MAC:"ea:b9:de:fb:bf:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:58.581199 containerd[1532]: 2025-11-24 06:59:58.570 [INFO][4106] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" Namespace="kube-system" Pod="coredns-66bc5c9577-v4mwq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--v4mwq-eth0" Nov 24 06:59:58.586956 systemd[1]: Started cri-containerd-65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204.scope - libcontainer container 65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204. 
Nov 24 06:59:58.635422 containerd[1532]: time="2025-11-24T06:59:58.635342364Z" level=info msg="connecting to shim 0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd" address="unix:///run/containerd/s/b4f6a085ab86e4ddd97e8259cb9cde9cf16f2d3b8b6514723851ceaebbd48e29" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:58.683031 systemd[1]: Started cri-containerd-0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd.scope - libcontainer container 0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd. Nov 24 06:59:58.765230 containerd[1532]: time="2025-11-24T06:59:58.764925411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-v4mwq,Uid:f6d6def7-1dbb-4073-936b-e22bda94a97c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd\"" Nov 24 06:59:58.767516 kubelet[2708]: E1124 06:59:58.767352 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:58.775089 containerd[1532]: time="2025-11-24T06:59:58.775017016Z" level=info msg="CreateContainer within sandbox \"0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 06:59:58.790683 containerd[1532]: time="2025-11-24T06:59:58.789872697Z" level=info msg="Container b5ad4c1160fb0b3451455060f4e29f4085d7a62af35fda0e20dcc893f87e094e: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:59:58.813746 containerd[1532]: time="2025-11-24T06:59:58.813509258Z" level=info msg="CreateContainer within sandbox \"0d636e9ebd18ec52c75e8d41f293072e5a54ee766da4a006a600037d0200d1dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5ad4c1160fb0b3451455060f4e29f4085d7a62af35fda0e20dcc893f87e094e\"" Nov 24 06:59:58.816845 containerd[1532]: time="2025-11-24T06:59:58.816593881Z" 
level=info msg="StartContainer for \"b5ad4c1160fb0b3451455060f4e29f4085d7a62af35fda0e20dcc893f87e094e\"" Nov 24 06:59:58.822661 containerd[1532]: time="2025-11-24T06:59:58.821668981Z" level=info msg="connecting to shim b5ad4c1160fb0b3451455060f4e29f4085d7a62af35fda0e20dcc893f87e094e" address="unix:///run/containerd/s/b4f6a085ab86e4ddd97e8259cb9cde9cf16f2d3b8b6514723851ceaebbd48e29" protocol=ttrpc version=3 Nov 24 06:59:58.888629 containerd[1532]: time="2025-11-24T06:59:58.888579619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-vgh4k,Uid:688d478f-9397-4780-ac83-825ed42a52b7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0b569686d6a0dbd7aeee5a8c9aec8b8754c8a0d78b1339cf37758dd8ab42afa4\"" Nov 24 06:59:58.893283 containerd[1532]: time="2025-11-24T06:59:58.892923690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:59:58.916991 systemd[1]: Started cri-containerd-b5ad4c1160fb0b3451455060f4e29f4085d7a62af35fda0e20dcc893f87e094e.scope - libcontainer container b5ad4c1160fb0b3451455060f4e29f4085d7a62af35fda0e20dcc893f87e094e. 
Nov 24 06:59:58.924772 containerd[1532]: time="2025-11-24T06:59:58.924470450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jmszh,Uid:31736d8f-1244-4ceb-aaba-f284117475ca,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:58.932386 containerd[1532]: time="2025-11-24T06:59:58.932335511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c87b844f6-pjvfq,Uid:e6c5825d-6ef5-4e3c-a5de-981230cfd835,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:59:58.993514 containerd[1532]: time="2025-11-24T06:59:58.992530782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xx5mj,Uid:65e83e8e-3217-4c5d-9dc8-a1e6cab084a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"65de94cc021323bb2de340d803b85ae6f21f9f2fd39ba53a1c63f1ee535bd204\"" Nov 24 06:59:59.069941 containerd[1532]: time="2025-11-24T06:59:59.069589556Z" level=info msg="StartContainer for \"b5ad4c1160fb0b3451455060f4e29f4085d7a62af35fda0e20dcc893f87e094e\" returns successfully" Nov 24 06:59:59.222604 containerd[1532]: time="2025-11-24T06:59:59.222539751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:59:59.224740 containerd[1532]: time="2025-11-24T06:59:59.224604029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:59:59.225587 containerd[1532]: time="2025-11-24T06:59:59.224788440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:59:59.230405 kubelet[2708]: E1124 06:59:59.225803 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:59:59.230405 kubelet[2708]: E1124 06:59:59.230154 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:59:59.231189 kubelet[2708]: E1124 06:59:59.230370 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79c5bccf9c-vgh4k_calico-apiserver(688d478f-9397-4780-ac83-825ed42a52b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:59:59.231189 kubelet[2708]: E1124 06:59:59.230487 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7" Nov 24 06:59:59.231844 containerd[1532]: time="2025-11-24T06:59:59.231796136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 06:59:59.262684 kubelet[2708]: E1124 06:59:59.262649 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:59.270968 kubelet[2708]: E1124 06:59:59.270804 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7" Nov 24 06:59:59.294310 kubelet[2708]: I1124 06:59:59.294129 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-v4mwq" podStartSLOduration=44.294111375 podStartE2EDuration="44.294111375s" podCreationTimestamp="2025-11-24 06:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:59:59.293385412 +0000 UTC m=+49.572332677" watchObservedRunningTime="2025-11-24 06:59:59.294111375 +0000 UTC m=+49.573058654" Nov 24 06:59:59.323783 systemd-networkd[1416]: cali31332c0bc2c: Link UP Nov 24 06:59:59.324030 systemd-networkd[1416]: cali31332c0bc2c: Gained carrier Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.114 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0 csi-node-driver- calico-system 31736d8f-1244-4ceb-aaba-f284117475ca 736 0 2025-11-24 06:59:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 csi-node-driver-jmszh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali31332c0bc2c [] [] }} ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.114 [INFO][4341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.200 [INFO][4380] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" HandleID="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Workload="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.201 [INFO][4380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" HandleID="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Workload="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"csi-node-driver-jmszh", "timestamp":"2025-11-24 06:59:59.200887702 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.201 [INFO][4380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.201 [INFO][4380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.201 [INFO][4380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.214 [INFO][4380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.225 [INFO][4380] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.242 [INFO][4380] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.250 [INFO][4380] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.259 [INFO][4380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.260 [INFO][4380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.265 [INFO][4380] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.279 [INFO][4380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.304 [INFO][4380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.197/26] block=192.168.43.192/26 handle="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.305 [INFO][4380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.197/26] handle="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.305 [INFO][4380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:59:59.389171 containerd[1532]: 2025-11-24 06:59:59.305 [INFO][4380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.197/26] IPv6=[] ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" HandleID="k8s-pod-network.b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Workload="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" Nov 24 06:59:59.391537 containerd[1532]: 2025-11-24 06:59:59.313 [INFO][4341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31736d8f-1244-4ceb-aaba-f284117475ca", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"csi-node-driver-jmszh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31332c0bc2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:59.391537 containerd[1532]: 2025-11-24 06:59:59.313 [INFO][4341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.197/32] ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" Nov 24 06:59:59.391537 containerd[1532]: 2025-11-24 06:59:59.314 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31332c0bc2c ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" Nov 24 06:59:59.391537 containerd[1532]: 2025-11-24 06:59:59.325 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" Nov 24 06:59:59.391537 containerd[1532]: 2025-11-24 06:59:59.328 [INFO][4341] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"31736d8f-1244-4ceb-aaba-f284117475ca", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa", Pod:"csi-node-driver-jmszh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali31332c0bc2c", MAC:"ca:ee:ef:2f:03:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:59.391537 containerd[1532]: 2025-11-24 06:59:59.384 [INFO][4341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" Namespace="calico-system" Pod="csi-node-driver-jmszh" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-csi--node--driver--jmszh-eth0" Nov 24 06:59:59.449774 containerd[1532]: time="2025-11-24T06:59:59.448459508Z" level=info msg="connecting to shim b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa" address="unix:///run/containerd/s/082b27d4397ba8c0820cc3af2974581accea773b475857f1119f94bd717d78cd" namespace=k8s.io protocol=ttrpc 
version=3 Nov 24 06:59:59.500994 systemd[1]: Started cri-containerd-b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa.scope - libcontainer container b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa. Nov 24 06:59:59.523833 systemd-networkd[1416]: calia27594aad04: Link UP Nov 24 06:59:59.524855 systemd-networkd[1416]: calia27594aad04: Gained carrier Nov 24 06:59:59.536234 containerd[1532]: time="2025-11-24T06:59:59.535966345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:59:59.554037 containerd[1532]: time="2025-11-24T06:59:59.553990166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 06:59:59.554439 kubelet[2708]: E1124 06:59:59.554402 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:59:59.554580 kubelet[2708]: E1124 06:59:59.554563 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:59:59.554676 containerd[1532]: time="2025-11-24T06:59:59.554055994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 06:59:59.554850 kubelet[2708]: E1124 06:59:59.554830 2708 
kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xx5mj_calico-system(65e83e8e-3217-4c5d-9dc8-a1e6cab084a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 06:59:59.555222 kubelet[2708]: E1124 06:59:59.555195 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.097 [INFO][4357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0 calico-apiserver-c87b844f6- calico-apiserver e6c5825d-6ef5-4e3c-a5de-981230cfd835 850 0 2025-11-24 06:59:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c87b844f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 calico-apiserver-c87b844f6-pjvfq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia27594aad04 [] [] }} ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" 
WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.101 [INFO][4357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.208 [INFO][4377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" HandleID="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.211 [INFO][4377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" HandleID="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"calico-apiserver-c87b844f6-pjvfq", "timestamp":"2025-11-24 06:59:59.208378142 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.211 [INFO][4377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.305 [INFO][4377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.305 [INFO][4377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.365 [INFO][4377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.395 [INFO][4377] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.412 [INFO][4377] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.427 [INFO][4377] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.438 [INFO][4377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.439 [INFO][4377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.447 [INFO][4377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6 Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.473 [INFO][4377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" 
host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.498 [INFO][4377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.43.198/26] block=192.168.43.192/26 handle="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.498 [INFO][4377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.198/26] handle="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" host="ci-4459.2.1-c-f92aac29d7" Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.499 [INFO][4377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:59:59.557520 containerd[1532]: 2025-11-24 06:59:59.499 [INFO][4377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.198/26] IPv6=[] ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" HandleID="k8s-pod-network.4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" Nov 24 06:59:59.558626 containerd[1532]: 2025-11-24 06:59:59.508 [INFO][4357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0", GenerateName:"calico-apiserver-c87b844f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6c5825d-6ef5-4e3c-a5de-981230cfd835", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 27, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c87b844f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"calico-apiserver-c87b844f6-pjvfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia27594aad04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:59.558626 containerd[1532]: 2025-11-24 06:59:59.512 [INFO][4357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.198/32] ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" Nov 24 06:59:59.558626 containerd[1532]: 2025-11-24 06:59:59.512 [INFO][4357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia27594aad04 ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" Nov 24 06:59:59.558626 containerd[1532]: 2025-11-24 06:59:59.519 [INFO][4357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" Nov 24 06:59:59.558626 containerd[1532]: 2025-11-24 06:59:59.523 [INFO][4357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0", GenerateName:"calico-apiserver-c87b844f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6c5825d-6ef5-4e3c-a5de-981230cfd835", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c87b844f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6", Pod:"calico-apiserver-c87b844f6-pjvfq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia27594aad04", MAC:"e2:33:70:40:27:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:59:59.558626 containerd[1532]: 2025-11-24 06:59:59.553 [INFO][4357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" Namespace="calico-apiserver" Pod="calico-apiserver-c87b844f6-pjvfq" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--c87b844f6--pjvfq-eth0" Nov 24 06:59:59.594047 containerd[1532]: time="2025-11-24T06:59:59.593816436Z" level=info msg="connecting to shim 4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6" address="unix:///run/containerd/s/131fdbd8ec0cc6002e1ab5c8df978e4f88d18b7909f2da00845d68d572368d8b" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:59:59.654375 containerd[1532]: time="2025-11-24T06:59:59.654241637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jmszh,Uid:31736d8f-1244-4ceb-aaba-f284117475ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"b96ee9556c0d4ced5ed73f5175b29f4641363a0c12bb7e38a7687af5cfa8a0aa\"" Nov 24 06:59:59.664103 containerd[1532]: time="2025-11-24T06:59:59.663667502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 06:59:59.668076 systemd[1]: Started cri-containerd-4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6.scope - libcontainer container 4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6. 
Nov 24 06:59:59.750806 systemd-networkd[1416]: cali584e2de9904: Gained IPv6LL Nov 24 06:59:59.791660 containerd[1532]: time="2025-11-24T06:59:59.791336157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c87b844f6-pjvfq,Uid:e6c5825d-6ef5-4e3c-a5de-981230cfd835,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4b42f27c516081ca22fd0a815b91b619f7043ff2bd022f4e49c9a32b449ab0c6\"" Nov 24 06:59:59.877049 systemd-networkd[1416]: cali1c1663f16b8: Gained IPv6LL Nov 24 06:59:59.922798 kubelet[2708]: E1124 06:59:59.922680 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 06:59:59.924669 containerd[1532]: time="2025-11-24T06:59:59.924614814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589b8dfb96-w2xrv,Uid:9a41cd22-6538-4edb-a2fc-fe53fb988efb,Namespace:calico-system,Attempt:0,}" Nov 24 06:59:59.928589 containerd[1532]: time="2025-11-24T06:59:59.927746223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kv44w,Uid:70e145da-424d-4d20-b7bf-e0cf67bf5a55,Namespace:kube-system,Attempt:0,}" Nov 24 06:59:59.928589 containerd[1532]: time="2025-11-24T06:59:59.928178807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-g8247,Uid:7e8ad9f8-a5c0-4992-b252-00ca50c053ae,Namespace:calico-apiserver,Attempt:0,}" Nov 24 07:00:00.034749 containerd[1532]: time="2025-11-24T07:00:00.033345696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:00.037893 containerd[1532]: time="2025-11-24T07:00:00.037546977Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 07:00:00.038696 containerd[1532]: time="2025-11-24T07:00:00.038630597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 07:00:00.042118 kubelet[2708]: E1124 07:00:00.041979 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:00:00.042118 kubelet[2708]: E1124 07:00:00.042062 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:00:00.042412 kubelet[2708]: E1124 07:00:00.042323 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:00.046741 containerd[1532]: time="2025-11-24T07:00:00.046545300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:00:00.300392 kubelet[2708]: E1124 07:00:00.298876 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:00.307789 kubelet[2708]: E1124 07:00:00.306980 2708 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7" Nov 24 07:00:00.308174 kubelet[2708]: E1124 07:00:00.307936 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7" Nov 24 07:00:00.389951 systemd-networkd[1416]: calid47c08facb0: Gained IPv6LL Nov 24 07:00:00.409749 containerd[1532]: time="2025-11-24T07:00:00.409632932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:00.426346 containerd[1532]: time="2025-11-24T07:00:00.425865902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:00:00.427292 containerd[1532]: time="2025-11-24T07:00:00.426665682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:00.428200 
kubelet[2708]: E1124 07:00:00.428071 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:00.428824 kubelet[2708]: E1124 07:00:00.428445 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:00.430244 containerd[1532]: time="2025-11-24T07:00:00.429984669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 07:00:00.430369 kubelet[2708]: E1124 07:00:00.430094 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c87b844f6-pjvfq_calico-apiserver(e6c5825d-6ef5-4e3c-a5de-981230cfd835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:00.430369 kubelet[2708]: E1124 07:00:00.430164 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" 
podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835" Nov 24 07:00:00.575268 systemd-networkd[1416]: cali9467b145a8d: Link UP Nov 24 07:00:00.577683 systemd-networkd[1416]: cali9467b145a8d: Gained carrier Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.126 [INFO][4507] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0 coredns-66bc5c9577- kube-system 70e145da-424d-4d20-b7bf-e0cf67bf5a55 857 0 2025-11-24 06:59:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 coredns-66bc5c9577-kv44w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9467b145a8d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.128 [INFO][4507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.307 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" HandleID="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Workload="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 
07:00:00.307 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" HandleID="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Workload="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"coredns-66bc5c9577-kv44w", "timestamp":"2025-11-24 07:00:00.307112805 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.307 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.307 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.309 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.345 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.400 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.486 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.500 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.512 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.512 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.517 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.534 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.560 [INFO][4544] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.43.199/26] block=192.168.43.192/26 handle="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.560 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.199/26] handle="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.560 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:00:00.616010 containerd[1532]: 2025-11-24 07:00:00.561 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.199/26] IPv6=[] ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" HandleID="k8s-pod-network.509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Workload="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" Nov 24 07:00:00.618198 containerd[1532]: 2025-11-24 07:00:00.566 [INFO][4507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"70e145da-424d-4d20-b7bf-e0cf67bf5a55", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"coredns-66bc5c9577-kv44w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9467b145a8d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:00:00.618198 containerd[1532]: 2025-11-24 07:00:00.567 [INFO][4507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.199/32] ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" Nov 24 07:00:00.618198 containerd[1532]: 2025-11-24 07:00:00.567 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9467b145a8d 
ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" Nov 24 07:00:00.618198 containerd[1532]: 2025-11-24 07:00:00.580 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" Nov 24 07:00:00.618198 containerd[1532]: 2025-11-24 07:00:00.581 [INFO][4507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"70e145da-424d-4d20-b7bf-e0cf67bf5a55", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a", 
Pod:"coredns-66bc5c9577-kv44w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9467b145a8d", MAC:"96:22:cc:90:fb:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:00:00.618551 containerd[1532]: 2025-11-24 07:00:00.610 [INFO][4507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" Namespace="kube-system" Pod="coredns-66bc5c9577-kv44w" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-coredns--66bc5c9577--kv44w-eth0" Nov 24 07:00:00.638331 kubelet[2708]: I1124 07:00:00.638280 2708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 07:00:00.641171 kubelet[2708]: E1124 07:00:00.641069 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:00.701561 containerd[1532]: time="2025-11-24T07:00:00.701484001Z" level=info msg="connecting to shim 
509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a" address="unix:///run/containerd/s/8714fad121de2c27e3c4836977d9700ac53ca517df151ea8ffa8edc5bc1adb52" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:00:00.743762 systemd-networkd[1416]: calibb4c8d5ad37: Link UP Nov 24 07:00:00.752245 systemd-networkd[1416]: calibb4c8d5ad37: Gained carrier Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.177 [INFO][4532] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0 calico-kube-controllers-589b8dfb96- calico-system 9a41cd22-6538-4edb-a2fc-fe53fb988efb 859 0 2025-11-24 06:59:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:589b8dfb96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 calico-kube-controllers-589b8dfb96-w2xrv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibb4c8d5ad37 [] [] }} ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.177 [INFO][4532] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.417 [INFO][4554] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" HandleID="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.417 [INFO][4554] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" HandleID="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c8970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"calico-kube-controllers-589b8dfb96-w2xrv", "timestamp":"2025-11-24 07:00:00.417043373 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.418 [INFO][4554] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.561 [INFO][4554] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.561 [INFO][4554] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.587 [INFO][4554] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.612 [INFO][4554] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.626 [INFO][4554] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.634 [INFO][4554] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.641 [INFO][4554] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.642 [INFO][4554] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.646 [INFO][4554] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295 Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.664 [INFO][4554] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.696 [INFO][4554] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.43.200/26] block=192.168.43.192/26 handle="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.697 [INFO][4554] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.200/26] handle="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.699 [INFO][4554] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:00:00.835588 containerd[1532]: 2025-11-24 07:00:00.699 [INFO][4554] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.200/26] IPv6=[] ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" HandleID="k8s-pod-network.a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" Nov 24 07:00:00.840380 containerd[1532]: 2025-11-24 07:00:00.716 [INFO][4532] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0", GenerateName:"calico-kube-controllers-589b8dfb96-", Namespace:"calico-system", SelfLink:"", UID:"9a41cd22-6538-4edb-a2fc-fe53fb988efb", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589b8dfb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"calico-kube-controllers-589b8dfb96-w2xrv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb4c8d5ad37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:00:00.840380 containerd[1532]: 2025-11-24 07:00:00.718 [INFO][4532] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.200/32] ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" Nov 24 07:00:00.840380 containerd[1532]: 2025-11-24 07:00:00.719 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb4c8d5ad37 ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" Nov 24 07:00:00.840380 containerd[1532]: 2025-11-24 07:00:00.768 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" Nov 24 07:00:00.840380 containerd[1532]: 2025-11-24 07:00:00.777 [INFO][4532] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0", GenerateName:"calico-kube-controllers-589b8dfb96-", Namespace:"calico-system", SelfLink:"", UID:"9a41cd22-6538-4edb-a2fc-fe53fb988efb", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589b8dfb96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295", Pod:"calico-kube-controllers-589b8dfb96-w2xrv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.200/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb4c8d5ad37", MAC:"ce:3b:55:30:05:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:00:00.840380 containerd[1532]: 2025-11-24 07:00:00.820 [INFO][4532] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" Namespace="calico-system" Pod="calico-kube-controllers-589b8dfb96-w2xrv" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--kube--controllers--589b8dfb96--w2xrv-eth0" Nov 24 07:00:00.840542 systemd-networkd[1416]: calia27594aad04: Gained IPv6LL Nov 24 07:00:00.880172 containerd[1532]: time="2025-11-24T07:00:00.880054141Z" level=info msg="connecting to shim a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295" address="unix:///run/containerd/s/4637ae86cf38e392f6d4c64712a52e58c2a2d56f432eb05d4d4f0c212b52687c" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:00:00.928038 systemd[1]: Started cri-containerd-509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a.scope - libcontainer container 509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a. Nov 24 07:00:01.006283 systemd[1]: Started cri-containerd-a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295.scope - libcontainer container a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295. 
Nov 24 07:00:01.022080 systemd-networkd[1416]: calid3a77c0b79a: Link UP Nov 24 07:00:01.028837 systemd-networkd[1416]: calid3a77c0b79a: Gained carrier Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.173 [INFO][4508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0 calico-apiserver-79c5bccf9c- calico-apiserver 7e8ad9f8-a5c0-4992-b252-00ca50c053ae 858 0 2025-11-24 06:59:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79c5bccf9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-c-f92aac29d7 calico-apiserver-79c5bccf9c-g8247 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid3a77c0b79a [] [] }} ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.173 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.432 [INFO][4549] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" HandleID="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" Nov 24 
07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.436 [INFO][4549] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" HandleID="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a9bc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-c-f92aac29d7", "pod":"calico-apiserver-79c5bccf9c-g8247", "timestamp":"2025-11-24 07:00:00.432287838 +0000 UTC"}, Hostname:"ci-4459.2.1-c-f92aac29d7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.437 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.700 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.700 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-c-f92aac29d7' Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.736 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.807 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.853 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.875 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.885 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.885 [INFO][4549] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.894 [INFO][4549] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02 Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.913 [INFO][4549] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.960 [INFO][4549] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.43.201/26] block=192.168.43.192/26 handle="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.963 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.43.201/26] handle="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" host="ci-4459.2.1-c-f92aac29d7" Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.963 [INFO][4549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 07:00:01.081361 containerd[1532]: 2025-11-24 07:00:00.964 [INFO][4549] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.43.201/26] IPv6=[] ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" HandleID="k8s-pod-network.ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Workload="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" Nov 24 07:00:01.085930 containerd[1532]: 2025-11-24 07:00:00.988 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0", GenerateName:"calico-apiserver-79c5bccf9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e8ad9f8-a5c0-4992-b252-00ca50c053ae", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5bccf9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"", Pod:"calico-apiserver-79c5bccf9c-g8247", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid3a77c0b79a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:00:01.085930 containerd[1532]: 2025-11-24 07:00:00.999 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.43.201/32] ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" Nov 24 07:00:01.085930 containerd[1532]: 2025-11-24 07:00:00.999 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3a77c0b79a ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" Nov 24 07:00:01.085930 containerd[1532]: 2025-11-24 07:00:01.028 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" 
Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" Nov 24 07:00:01.085930 containerd[1532]: 2025-11-24 07:00:01.031 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0", GenerateName:"calico-apiserver-79c5bccf9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e8ad9f8-a5c0-4992-b252-00ca50c053ae", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 59, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79c5bccf9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-c-f92aac29d7", ContainerID:"ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02", Pod:"calico-apiserver-79c5bccf9c-g8247", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calid3a77c0b79a", MAC:"72:56:99:5f:c3:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 07:00:01.085930 containerd[1532]: 2025-11-24 07:00:01.055 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" Namespace="calico-apiserver" Pod="calico-apiserver-79c5bccf9c-g8247" WorkloadEndpoint="ci--4459.2.1--c--f92aac29d7-k8s-calico--apiserver--79c5bccf9c--g8247-eth0" Nov 24 07:00:01.093020 systemd-networkd[1416]: cali31332c0bc2c: Gained IPv6LL Nov 24 07:00:01.176486 containerd[1532]: time="2025-11-24T07:00:01.176193163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kv44w,Uid:70e145da-424d-4d20-b7bf-e0cf67bf5a55,Namespace:kube-system,Attempt:0,} returns sandbox id \"509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a\"" Nov 24 07:00:01.178434 kubelet[2708]: E1124 07:00:01.178395 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:01.191888 containerd[1532]: time="2025-11-24T07:00:01.189086314Z" level=info msg="CreateContainer within sandbox \"509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 07:00:01.207310 containerd[1532]: time="2025-11-24T07:00:01.207239782Z" level=info msg="connecting to shim ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02" address="unix:///run/containerd/s/888f100d9f6e5a516838f4c3510ca91aa42248a755496c2bc5296815c23fb5ac" namespace=k8s.io protocol=ttrpc version=3 Nov 24 07:00:01.217213 containerd[1532]: time="2025-11-24T07:00:01.217154078Z" level=info msg="Container b1e56140b44b9d2ac05338b0e74dde524cb9855fa942dc7579061967856b1a64: CDI devices from CRI Config.CDIDevices: []" 
Nov 24 07:00:01.226340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628089198.mount: Deactivated successfully. Nov 24 07:00:01.243896 containerd[1532]: time="2025-11-24T07:00:01.242384393Z" level=info msg="CreateContainer within sandbox \"509e2cb1fd2a36cf398a000cc5008de5da570ec05cb77b838aaccb85fe2faf1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b1e56140b44b9d2ac05338b0e74dde524cb9855fa942dc7579061967856b1a64\"" Nov 24 07:00:01.247385 containerd[1532]: time="2025-11-24T07:00:01.247342173Z" level=info msg="StartContainer for \"b1e56140b44b9d2ac05338b0e74dde524cb9855fa942dc7579061967856b1a64\"" Nov 24 07:00:01.256776 containerd[1532]: time="2025-11-24T07:00:01.256270429Z" level=info msg="connecting to shim b1e56140b44b9d2ac05338b0e74dde524cb9855fa942dc7579061967856b1a64" address="unix:///run/containerd/s/8714fad121de2c27e3c4836977d9700ac53ca517df151ea8ffa8edc5bc1adb52" protocol=ttrpc version=3 Nov 24 07:00:01.327164 kubelet[2708]: E1124 07:00:01.327116 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:01.331471 systemd[1]: Started cri-containerd-b1e56140b44b9d2ac05338b0e74dde524cb9855fa942dc7579061967856b1a64.scope - libcontainer container b1e56140b44b9d2ac05338b0e74dde524cb9855fa942dc7579061967856b1a64. 
Nov 24 07:00:01.339787 kubelet[2708]: E1124 07:00:01.339569 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835" Nov 24 07:00:01.358629 systemd[1]: Started cri-containerd-ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02.scope - libcontainer container ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02. Nov 24 07:00:01.435275 containerd[1532]: time="2025-11-24T07:00:01.433480816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:01.442766 containerd[1532]: time="2025-11-24T07:00:01.442680457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 07:00:01.443700 containerd[1532]: time="2025-11-24T07:00:01.443631613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 07:00:01.446001 kubelet[2708]: E1124 07:00:01.445787 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:00:01.446666 kubelet[2708]: E1124 07:00:01.446112 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:00:01.448787 kubelet[2708]: E1124 07:00:01.447397 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:01.448787 kubelet[2708]: E1124 07:00:01.447492 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" 
podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 07:00:01.462226 containerd[1532]: time="2025-11-24T07:00:01.462066532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589b8dfb96-w2xrv,Uid:9a41cd22-6538-4edb-a2fc-fe53fb988efb,Namespace:calico-system,Attempt:0,} returns sandbox id \"a37e49ca71e5c4eaebf2014c5b74a97bc965200ad9853d053d0634f63fce7295\"" Nov 24 07:00:01.471864 containerd[1532]: time="2025-11-24T07:00:01.471764292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 07:00:01.516053 containerd[1532]: time="2025-11-24T07:00:01.515971266Z" level=info msg="StartContainer for \"b1e56140b44b9d2ac05338b0e74dde524cb9855fa942dc7579061967856b1a64\" returns successfully" Nov 24 07:00:01.950776 containerd[1532]: time="2025-11-24T07:00:01.950704018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79c5bccf9c-g8247,Uid:7e8ad9f8-a5c0-4992-b252-00ca50c053ae,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ca660ed1aa276693853cc76358453987f9664e4afa13d9cf076afff525c67a02\"" Nov 24 07:00:02.041267 kubelet[2708]: E1124 07:00:02.041189 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:02.349953 kubelet[2708]: E1124 07:00:02.349782 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:02.368850 kubelet[2708]: E1124 07:00:02.368599 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 07:00:02.494544 containerd[1532]: time="2025-11-24T07:00:02.494085211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:02.499437 containerd[1532]: time="2025-11-24T07:00:02.499178460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 07:00:02.500065 containerd[1532]: time="2025-11-24T07:00:02.499266455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 07:00:02.502613 kubelet[2708]: I1124 07:00:02.502433 2708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kv44w" podStartSLOduration=47.502352248 podStartE2EDuration="47.502352248s" podCreationTimestamp="2025-11-24 06:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 07:00:02.424825318 +0000 UTC m=+52.703772589" watchObservedRunningTime="2025-11-24 07:00:02.502352248 +0000 UTC 
m=+52.781299540" Nov 24 07:00:02.506262 systemd-networkd[1416]: cali9467b145a8d: Gained IPv6LL Nov 24 07:00:02.508276 systemd-networkd[1416]: calid3a77c0b79a: Gained IPv6LL Nov 24 07:00:02.510105 kubelet[2708]: E1124 07:00:02.509548 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:00:02.510105 kubelet[2708]: E1124 07:00:02.509659 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:00:02.512967 kubelet[2708]: E1124 07:00:02.510732 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-589b8dfb96-w2xrv_calico-system(9a41cd22-6538-4edb-a2fc-fe53fb988efb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:02.512967 kubelet[2708]: E1124 07:00:02.510813 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb" Nov 24 07:00:02.513344 containerd[1532]: time="2025-11-24T07:00:02.510812032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:00:02.696182 systemd-networkd[1416]: calibb4c8d5ad37: Gained IPv6LL Nov 24 07:00:03.367624 kubelet[2708]: E1124 07:00:03.366879 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:03.369154 kubelet[2708]: E1124 07:00:03.369007 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb" Nov 24 07:00:03.701462 containerd[1532]: time="2025-11-24T07:00:03.701092904Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:03.705521 containerd[1532]: time="2025-11-24T07:00:03.705244167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:00:03.705521 containerd[1532]: 
time="2025-11-24T07:00:03.705277012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:03.706740 kubelet[2708]: E1124 07:00:03.706426 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:03.706740 kubelet[2708]: E1124 07:00:03.706497 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:03.707390 kubelet[2708]: E1124 07:00:03.707234 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79c5bccf9c-g8247_calico-apiserver(7e8ad9f8-a5c0-4992-b252-00ca50c053ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:03.707390 kubelet[2708]: E1124 07:00:03.707311 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" 
podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae" Nov 24 07:00:04.372784 kubelet[2708]: E1124 07:00:04.372591 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:04.374771 kubelet[2708]: E1124 07:00:04.374121 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae" Nov 24 07:00:08.922810 containerd[1532]: time="2025-11-24T07:00:08.922631280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 07:00:09.560327 containerd[1532]: time="2025-11-24T07:00:09.560210647Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:09.561362 containerd[1532]: time="2025-11-24T07:00:09.561201689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 07:00:09.561362 containerd[1532]: time="2025-11-24T07:00:09.561234520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 07:00:09.562754 kubelet[2708]: E1124 07:00:09.562012 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:00:09.562754 kubelet[2708]: E1124 07:00:09.562089 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:00:09.562754 kubelet[2708]: E1124 07:00:09.562236 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6f49b47ccf-8dgfn_calico-system(43c71191-e8bd-4deb-bacd-b63f93810870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:09.565584 containerd[1532]: time="2025-11-24T07:00:09.565310092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 07:00:10.800661 containerd[1532]: time="2025-11-24T07:00:10.800471728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:10.802638 containerd[1532]: time="2025-11-24T07:00:10.802408092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 07:00:10.802638 containerd[1532]: time="2025-11-24T07:00:10.802467318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 07:00:10.803165 kubelet[2708]: E1124 07:00:10.803099 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:00:10.804901 kubelet[2708]: E1124 07:00:10.803171 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:00:10.804901 kubelet[2708]: E1124 07:00:10.803273 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6f49b47ccf-8dgfn_calico-system(43c71191-e8bd-4deb-bacd-b63f93810870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:10.804901 kubelet[2708]: E1124 07:00:10.803332 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870" Nov 24 07:00:10.925628 containerd[1532]: time="2025-11-24T07:00:10.925491836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 07:00:11.560538 containerd[1532]: time="2025-11-24T07:00:11.560293108Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:11.561522 containerd[1532]: time="2025-11-24T07:00:11.561445685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 07:00:11.561813 containerd[1532]: time="2025-11-24T07:00:11.561516440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:11.562145 kubelet[2708]: E1124 07:00:11.562045 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:00:11.562145 kubelet[2708]: E1124 07:00:11.562117 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:00:11.563148 kubelet[2708]: E1124 07:00:11.563103 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xx5mj_calico-system(65e83e8e-3217-4c5d-9dc8-a1e6cab084a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:11.563236 kubelet[2708]: E1124 07:00:11.563154 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7" Nov 24 07:00:11.563633 containerd[1532]: time="2025-11-24T07:00:11.563569628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:00:12.362298 containerd[1532]: time="2025-11-24T07:00:12.362196926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:12.389174 containerd[1532]: time="2025-11-24T07:00:12.389047298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:00:12.389395 containerd[1532]: time="2025-11-24T07:00:12.389194825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 
07:00:12.389462 kubelet[2708]: E1124 07:00:12.389363 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:12.389462 kubelet[2708]: E1124 07:00:12.389414 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:12.389921 kubelet[2708]: E1124 07:00:12.389500 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79c5bccf9c-vgh4k_calico-apiserver(688d478f-9397-4780-ac83-825ed42a52b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:12.389921 kubelet[2708]: E1124 07:00:12.389540 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7" Nov 24 07:00:13.931833 containerd[1532]: time="2025-11-24T07:00:13.931085603Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:00:14.556115 containerd[1532]: time="2025-11-24T07:00:14.555841287Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:14.560542 containerd[1532]: time="2025-11-24T07:00:14.560430038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:00:14.561998 containerd[1532]: time="2025-11-24T07:00:14.560482530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:14.562077 kubelet[2708]: E1124 07:00:14.561083 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:14.562077 kubelet[2708]: E1124 07:00:14.561165 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:14.563015 kubelet[2708]: E1124 07:00:14.562878 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c87b844f6-pjvfq_calico-apiserver(e6c5825d-6ef5-4e3c-a5de-981230cfd835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:14.563145 kubelet[2708]: E1124 07:00:14.563032 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835" Nov 24 07:00:14.923092 containerd[1532]: time="2025-11-24T07:00:14.923049737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 07:00:15.604691 containerd[1532]: time="2025-11-24T07:00:15.604622684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:15.606757 containerd[1532]: time="2025-11-24T07:00:15.606633338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 07:00:15.607182 containerd[1532]: time="2025-11-24T07:00:15.606670437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 07:00:15.607270 kubelet[2708]: E1124 07:00:15.607145 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:00:15.607270 kubelet[2708]: E1124 07:00:15.607199 2708 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:00:15.608997 kubelet[2708]: E1124 07:00:15.608946 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:15.611949 containerd[1532]: time="2025-11-24T07:00:15.611887832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 07:00:16.439350 containerd[1532]: time="2025-11-24T07:00:16.439117387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:16.440217 containerd[1532]: time="2025-11-24T07:00:16.440081652Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 07:00:16.440503 containerd[1532]: time="2025-11-24T07:00:16.440196450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 07:00:16.440827 kubelet[2708]: E1124 07:00:16.440772 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:00:16.440908 kubelet[2708]: E1124 07:00:16.440841 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:00:16.441582 kubelet[2708]: E1124 07:00:16.441121 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:16.441582 kubelet[2708]: E1124 07:00:16.441175 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 07:00:16.442092 containerd[1532]: time="2025-11-24T07:00:16.441304939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 07:00:17.112956 containerd[1532]: time="2025-11-24T07:00:17.112693506Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:17.114859 containerd[1532]: time="2025-11-24T07:00:17.114691982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 07:00:17.114859 containerd[1532]: time="2025-11-24T07:00:17.114828671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 07:00:17.115299 kubelet[2708]: E1124 07:00:17.115209 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:00:17.115299 kubelet[2708]: E1124 07:00:17.115281 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:00:17.117087 kubelet[2708]: E1124 07:00:17.116937 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-589b8dfb96-w2xrv_calico-system(9a41cd22-6538-4edb-a2fc-fe53fb988efb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:17.117087 kubelet[2708]: E1124 07:00:17.117004 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb" Nov 24 07:00:18.925366 containerd[1532]: time="2025-11-24T07:00:18.925303400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:00:19.374243 containerd[1532]: time="2025-11-24T07:00:19.374168300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:19.375168 containerd[1532]: time="2025-11-24T07:00:19.375114683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:00:19.375291 containerd[1532]: time="2025-11-24T07:00:19.375168590Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:19.375742 kubelet[2708]: E1124 07:00:19.375634 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:19.376803 kubelet[2708]: E1124 07:00:19.375851 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:19.376803 kubelet[2708]: E1124 07:00:19.376137 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79c5bccf9c-g8247_calico-apiserver(7e8ad9f8-a5c0-4992-b252-00ca50c053ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:19.376803 kubelet[2708]: E1124 07:00:19.376177 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae" Nov 24 
07:00:23.925424 kubelet[2708]: E1124 07:00:23.925345 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7" Nov 24 07:00:24.924331 kubelet[2708]: E1124 07:00:24.924236 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870" Nov 24 07:00:25.924392 kubelet[2708]: E1124 07:00:25.923844 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835" Nov 24 07:00:26.924040 kubelet[2708]: E1124 07:00:26.923919 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7" Nov 24 07:00:28.923443 kubelet[2708]: E1124 07:00:28.923367 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" 
podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 07:00:31.920641 kubelet[2708]: E1124 07:00:31.920533 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:31.924499 kubelet[2708]: E1124 07:00:31.924027 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb" Nov 24 07:00:33.504689 systemd[1]: Started sshd@7-164.90.155.191:22-139.178.68.195:60140.service - OpenSSH per-connection server daemon (139.178.68.195:60140). Nov 24 07:00:33.675383 sshd[4880]: Accepted publickey for core from 139.178.68.195 port 60140 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:33.680542 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:33.692209 systemd-logind[1496]: New session 8 of user core. Nov 24 07:00:33.698836 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 24 07:00:33.922920 kubelet[2708]: E1124 07:00:33.922402 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:34.397504 sshd[4886]: Connection closed by 139.178.68.195 port 60140 Nov 24 07:00:34.398806 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:34.415079 systemd[1]: sshd@7-164.90.155.191:22-139.178.68.195:60140.service: Deactivated successfully. Nov 24 07:00:34.423578 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 07:00:34.428180 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Nov 24 07:00:34.431936 systemd-logind[1496]: Removed session 8. Nov 24 07:00:34.921554 kubelet[2708]: E1124 07:00:34.921447 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae" Nov 24 07:00:36.920491 kubelet[2708]: E1124 07:00:36.920113 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:36.923283 containerd[1532]: time="2025-11-24T07:00:36.923202083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 07:00:37.255197 containerd[1532]: time="2025-11-24T07:00:37.254871065Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:37.257313 
containerd[1532]: time="2025-11-24T07:00:37.257092729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 07:00:37.259756 containerd[1532]: time="2025-11-24T07:00:37.257290258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:37.260135 kubelet[2708]: E1124 07:00:37.260079 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:00:37.260367 kubelet[2708]: E1124 07:00:37.260264 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 07:00:37.269011 kubelet[2708]: E1124 07:00:37.268773 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xx5mj_calico-system(65e83e8e-3217-4c5d-9dc8-a1e6cab084a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:37.269011 kubelet[2708]: E1124 07:00:37.268840 2708 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7" Nov 24 07:00:37.926745 containerd[1532]: time="2025-11-24T07:00:37.926042247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 07:00:38.280561 containerd[1532]: time="2025-11-24T07:00:38.280223639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:38.282485 containerd[1532]: time="2025-11-24T07:00:38.282010610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 07:00:38.282485 containerd[1532]: time="2025-11-24T07:00:38.282141979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 07:00:38.282949 kubelet[2708]: E1124 07:00:38.282897 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:00:38.285443 kubelet[2708]: E1124 07:00:38.285159 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 07:00:38.287086 kubelet[2708]: E1124 07:00:38.286120 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6f49b47ccf-8dgfn_calico-system(43c71191-e8bd-4deb-bacd-b63f93810870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:38.287252 containerd[1532]: time="2025-11-24T07:00:38.286869543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:00:38.689223 containerd[1532]: time="2025-11-24T07:00:38.689157289Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:38.690796 containerd[1532]: time="2025-11-24T07:00:38.690541702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:00:38.690796 containerd[1532]: time="2025-11-24T07:00:38.690676408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:38.691514 kubelet[2708]: E1124 07:00:38.691400 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:38.691760 kubelet[2708]: E1124 07:00:38.691679 2708 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:38.692677 kubelet[2708]: E1124 07:00:38.692480 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c87b844f6-pjvfq_calico-apiserver(e6c5825d-6ef5-4e3c-a5de-981230cfd835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:38.692677 kubelet[2708]: E1124 07:00:38.692565 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835" Nov 24 07:00:38.693362 containerd[1532]: time="2025-11-24T07:00:38.693309460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 07:00:39.051185 containerd[1532]: time="2025-11-24T07:00:39.050886416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:39.052277 containerd[1532]: time="2025-11-24T07:00:39.051809553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 07:00:39.052277 containerd[1532]: time="2025-11-24T07:00:39.051919178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 07:00:39.052874 kubelet[2708]: E1124 07:00:39.052591 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:00:39.052874 kubelet[2708]: E1124 07:00:39.052661 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 07:00:39.054593 kubelet[2708]: E1124 07:00:39.054564 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6f49b47ccf-8dgfn_calico-system(43c71191-e8bd-4deb-bacd-b63f93810870): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:39.055774 kubelet[2708]: E1124 07:00:39.054743 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870" Nov 24 07:00:39.426823 systemd[1]: Started sshd@8-164.90.155.191:22-139.178.68.195:60144.service - OpenSSH per-connection server daemon (139.178.68.195:60144). Nov 24 07:00:39.537776 sshd[4918]: Accepted publickey for core from 139.178.68.195 port 60144 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:39.540623 sshd-session[4918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:39.553822 systemd-logind[1496]: New session 9 of user core. Nov 24 07:00:39.560072 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 07:00:39.814285 sshd[4921]: Connection closed by 139.178.68.195 port 60144 Nov 24 07:00:39.815102 sshd-session[4918]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:39.821123 systemd[1]: sshd@8-164.90.155.191:22-139.178.68.195:60144.service: Deactivated successfully. Nov 24 07:00:39.826402 systemd[1]: session-9.scope: Deactivated successfully. Nov 24 07:00:39.828789 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. Nov 24 07:00:39.832701 systemd-logind[1496]: Removed session 9. 
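The entries above repeat the same `NotFound` failure for several Calico image references. A small, hypothetical parsing helper can summarize which images are stuck in ImagePullBackOff; it is not part of any tool shown in this log, and the regex's handling of the journal's backslash escaping is an assumption:

```python
import re
from collections import Counter

# Hypothetical helper: matches the escaped image reference inside kubelet
# "Error syncing pod" entries, e.g. \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\".
# The [\\"]+ run absorbs whatever mix of backslashes and quotes the journal used.
IMAGE_RE = re.compile(r'Back-off pulling image [\\"]+([^\\"]+)')

def failing_images(lines):
    """Count image references appearing in ImagePullBackOff log entries."""
    counts = Counter()
    for line in lines:
        for image in IMAGE_RE.findall(line):
            counts[image] += 1
    return counts

# Sample shaped like the entries above (raw string keeps the escapes literal).
sample = r'kubelet[2708]: E1124 07:00:23.925345 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\"'
```

Feeding the full journal through such a helper would show every failing reference shares the `ghcr.io/flatcar/calico/*:v3.30.4` prefix, pointing at a single missing tag or mirror rather than per-pod problems.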
Nov 24 07:00:39.923338 kubelet[2708]: E1124 07:00:39.922954 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 24 07:00:40.924005 containerd[1532]: time="2025-11-24T07:00:40.923959650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 07:00:41.252768 containerd[1532]: time="2025-11-24T07:00:41.252479620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:41.253604 containerd[1532]: time="2025-11-24T07:00:41.253331015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 07:00:41.253604 containerd[1532]: time="2025-11-24T07:00:41.253432354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 07:00:41.253783 kubelet[2708]: E1124 07:00:41.253700 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:41.254478 kubelet[2708]: E1124 07:00:41.253798 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 07:00:41.254794 kubelet[2708]: 
E1124 07:00:41.254758 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79c5bccf9c-vgh4k_calico-apiserver(688d478f-9397-4780-ac83-825ed42a52b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:41.254859 kubelet[2708]: E1124 07:00:41.254816 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7" Nov 24 07:00:43.927554 containerd[1532]: time="2025-11-24T07:00:43.926912510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 07:00:44.289149 containerd[1532]: time="2025-11-24T07:00:44.288989882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:44.290726 containerd[1532]: time="2025-11-24T07:00:44.290379554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 07:00:44.290726 containerd[1532]: time="2025-11-24T07:00:44.290505148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 07:00:44.290933 kubelet[2708]: E1124 07:00:44.290787 2708 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:00:44.290933 kubelet[2708]: E1124 07:00:44.290870 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 07:00:44.291352 kubelet[2708]: E1124 07:00:44.291027 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:44.294725 containerd[1532]: time="2025-11-24T07:00:44.294656033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 07:00:44.605417 containerd[1532]: time="2025-11-24T07:00:44.605207188Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:44.606187 containerd[1532]: time="2025-11-24T07:00:44.606130024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 07:00:44.606328 containerd[1532]: 
time="2025-11-24T07:00:44.606236678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 07:00:44.608748 kubelet[2708]: E1124 07:00:44.607895 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:00:44.608748 kubelet[2708]: E1124 07:00:44.607971 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 07:00:44.608748 kubelet[2708]: E1124 07:00:44.608107 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-jmszh_calico-system(31736d8f-1244-4ceb-aaba-f284117475ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:44.609077 kubelet[2708]: E1124 07:00:44.608169 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca" Nov 24 07:00:44.835113 systemd[1]: Started sshd@9-164.90.155.191:22-139.178.68.195:39828.service - OpenSSH per-connection server daemon (139.178.68.195:39828). Nov 24 07:00:44.922671 containerd[1532]: time="2025-11-24T07:00:44.922616679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 07:00:45.011451 sshd[4937]: Accepted publickey for core from 139.178.68.195 port 39828 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:45.015686 sshd-session[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:45.025028 systemd-logind[1496]: New session 10 of user core. Nov 24 07:00:45.031358 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 24 07:00:45.283599 containerd[1532]: time="2025-11-24T07:00:45.283427777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 07:00:45.284752 containerd[1532]: time="2025-11-24T07:00:45.284665483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 07:00:45.284979 containerd[1532]: time="2025-11-24T07:00:45.284858395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 07:00:45.285915 kubelet[2708]: E1124 07:00:45.285835 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:00:45.285915 kubelet[2708]: E1124 07:00:45.285915 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 07:00:45.286278 kubelet[2708]: E1124 07:00:45.286033 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-589b8dfb96-w2xrv_calico-system(9a41cd22-6538-4edb-a2fc-fe53fb988efb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 07:00:45.286278 kubelet[2708]: E1124 07:00:45.286086 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb" Nov 24 07:00:45.373099 sshd[4940]: Connection closed by 139.178.68.195 port 39828 Nov 24 07:00:45.380438 sshd-session[4937]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:45.400035 systemd[1]: sshd@9-164.90.155.191:22-139.178.68.195:39828.service: Deactivated successfully. Nov 24 07:00:45.400551 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Nov 24 07:00:45.406159 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 07:00:45.417089 systemd[1]: Started sshd@10-164.90.155.191:22-139.178.68.195:39840.service - OpenSSH per-connection server daemon (139.178.68.195:39840). Nov 24 07:00:45.418822 systemd-logind[1496]: Removed session 10. Nov 24 07:00:45.510039 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 39840 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:45.512517 sshd-session[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:45.523873 systemd-logind[1496]: New session 11 of user core. Nov 24 07:00:45.530029 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 24 07:00:45.900628 sshd[4956]: Connection closed by 139.178.68.195 port 39840 Nov 24 07:00:45.902978 sshd-session[4953]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:45.918146 systemd[1]: sshd@10-164.90.155.191:22-139.178.68.195:39840.service: Deactivated successfully. Nov 24 07:00:45.923003 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 07:00:45.926150 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Nov 24 07:00:45.934787 systemd[1]: Started sshd@11-164.90.155.191:22-139.178.68.195:39844.service - OpenSSH per-connection server daemon (139.178.68.195:39844). Nov 24 07:00:45.937949 systemd-logind[1496]: Removed session 11. Nov 24 07:00:46.044755 sshd[4965]: Accepted publickey for core from 139.178.68.195 port 39844 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ Nov 24 07:00:46.046940 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 07:00:46.064892 systemd-logind[1496]: New session 12 of user core. Nov 24 07:00:46.068959 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 24 07:00:46.299021 sshd[4968]: Connection closed by 139.178.68.195 port 39844 Nov 24 07:00:46.301103 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Nov 24 07:00:46.314297 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Nov 24 07:00:46.314626 systemd[1]: sshd@11-164.90.155.191:22-139.178.68.195:39844.service: Deactivated successfully. Nov 24 07:00:46.321683 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 07:00:46.326881 systemd-logind[1496]: Removed session 12. 
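The recurring `dns.go:154` "Nameserver limits exceeded" warnings come from kubelet capping the nameserver list it applies; note the applied line also repeats 67.207.67.3, suggesting a duplicate entry in the host's resolv.conf. A hypothetical sketch of that truncation, plus a cleanup helper: the three-nameserver cap is assumed to match kubelet's documented validation limit, and `dedupe` is an illustrative fix for the host file, not kubelet code:

```python
MAX_NAMESERVERS = 3  # assumed kubelet limit behind the "limits exceeded" warning

def applied_nameservers(nameservers, limit=MAX_NAMESERVERS):
    """Split a resolv.conf nameserver list into (kept, omitted) per the cap."""
    return nameservers[:limit], nameservers[limit:]

def dedupe(nameservers):
    """Drop repeated nameservers while preserving first-seen order."""
    seen, out = set(), []
    for ns in nameservers:
        if ns not in seen:
            seen.add(ns)
            out.append(ns)
    return out
```

With a duplicate removed from the host's resolv.conf, the list would likely fit under the cap and the warning would stop repeating.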
Nov 24 07:00:47.925704 containerd[1532]: time="2025-11-24T07:00:47.925368204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 24 07:00:48.269880 containerd[1532]: time="2025-11-24T07:00:48.269269646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 24 07:00:48.271537 containerd[1532]: time="2025-11-24T07:00:48.271468117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 24 07:00:48.271963 containerd[1532]: time="2025-11-24T07:00:48.271516358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 24 07:00:48.272315 kubelet[2708]: E1124 07:00:48.272114 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 24 07:00:48.272315 kubelet[2708]: E1124 07:00:48.272197 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 24 07:00:48.273285 kubelet[2708]: E1124 07:00:48.273129 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-79c5bccf9c-g8247_calico-apiserver(7e8ad9f8-a5c0-4992-b252-00ca50c053ae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 24 07:00:48.275151 kubelet[2708]: E1124 07:00:48.274918 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae"
Nov 24 07:00:49.924763 kubelet[2708]: E1124 07:00:49.924600 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870"
Nov 24 07:00:49.927302 kubelet[2708]: E1124 07:00:49.925489 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7"
Nov 24 07:00:49.927302 kubelet[2708]: E1124 07:00:49.925554 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835"
Nov 24 07:00:51.319197 systemd[1]: Started sshd@12-164.90.155.191:22-139.178.68.195:56574.service - OpenSSH per-connection server daemon (139.178.68.195:56574).
Nov 24 07:00:51.411236 sshd[4984]: Accepted publickey for core from 139.178.68.195 port 56574 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:00:51.414664 sshd-session[4984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:00:51.426741 systemd-logind[1496]: New session 13 of user core.
Nov 24 07:00:51.432461 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 24 07:00:51.697751 sshd[4987]: Connection closed by 139.178.68.195 port 56574
Nov 24 07:00:51.698959 sshd-session[4984]: pam_unix(sshd:session): session closed for user core
Nov 24 07:00:51.707440 systemd[1]: sshd@12-164.90.155.191:22-139.178.68.195:56574.service: Deactivated successfully.
Nov 24 07:00:51.714350 systemd[1]: session-13.scope: Deactivated successfully.
Nov 24 07:00:51.717778 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit.
Nov 24 07:00:51.723318 systemd-logind[1496]: Removed session 13.
Nov 24 07:00:51.924764 kubelet[2708]: E1124 07:00:51.924156 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:00:52.923418 kubelet[2708]: E1124 07:00:52.922952 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7"
Nov 24 07:00:56.719910 systemd[1]: Started sshd@13-164.90.155.191:22-139.178.68.195:56590.service - OpenSSH per-connection server daemon (139.178.68.195:56590).
Nov 24 07:00:56.805390 sshd[5002]: Accepted publickey for core from 139.178.68.195 port 56590 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:00:56.809113 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:00:56.817371 systemd-logind[1496]: New session 14 of user core.
Nov 24 07:00:56.825531 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 24 07:00:57.029276 sshd[5005]: Connection closed by 139.178.68.195 port 56590
Nov 24 07:00:57.029749 sshd-session[5002]: pam_unix(sshd:session): session closed for user core
Nov 24 07:00:57.036095 systemd[1]: sshd@13-164.90.155.191:22-139.178.68.195:56590.service: Deactivated successfully.
Nov 24 07:00:57.043001 systemd[1]: session-14.scope: Deactivated successfully.
Nov 24 07:00:57.044809 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit.
Nov 24 07:00:57.050750 systemd-logind[1496]: Removed session 14.
Nov 24 07:00:57.928790 kubelet[2708]: E1124 07:00:57.928482 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca"
Nov 24 07:00:59.923430 kubelet[2708]: E1124 07:00:59.923208 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae"
Nov 24 07:01:00.922568 kubelet[2708]: E1124 07:01:00.921831 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7"
Nov 24 07:01:00.922805 kubelet[2708]: E1124 07:01:00.922699 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb"
Nov 24 07:01:01.930168 kubelet[2708]: E1124 07:01:01.929798 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835"
Nov 24 07:01:02.047822 systemd[1]: Started sshd@14-164.90.155.191:22-139.178.68.195:54286.service - OpenSSH per-connection server daemon (139.178.68.195:54286).
Nov 24 07:01:02.136526 sshd[5018]: Accepted publickey for core from 139.178.68.195 port 54286 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:02.138950 sshd-session[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:02.147556 systemd-logind[1496]: New session 15 of user core.
Nov 24 07:01:02.154365 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 24 07:01:02.391454 sshd[5045]: Connection closed by 139.178.68.195 port 54286
Nov 24 07:01:02.391017 sshd-session[5018]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:02.400212 systemd[1]: sshd@14-164.90.155.191:22-139.178.68.195:54286.service: Deactivated successfully.
Nov 24 07:01:02.405431 systemd[1]: session-15.scope: Deactivated successfully.
Nov 24 07:01:02.406906 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit.
Nov 24 07:01:02.409724 systemd-logind[1496]: Removed session 15.
Nov 24 07:01:02.925812 kubelet[2708]: E1124 07:01:02.925701 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870"
Nov 24 07:01:03.923445 kubelet[2708]: E1124 07:01:03.922787 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:06.922944 kubelet[2708]: E1124 07:01:06.922868 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7"
Nov 24 07:01:07.411786 systemd[1]: Started sshd@15-164.90.155.191:22-139.178.68.195:54298.service - OpenSSH per-connection server daemon (139.178.68.195:54298).
Nov 24 07:01:07.500941 sshd[5057]: Accepted publickey for core from 139.178.68.195 port 54298 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:07.504324 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:07.514794 systemd-logind[1496]: New session 16 of user core.
Nov 24 07:01:07.520037 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 24 07:01:07.702822 sshd[5060]: Connection closed by 139.178.68.195 port 54298
Nov 24 07:01:07.702453 sshd-session[5057]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:07.717823 systemd[1]: sshd@15-164.90.155.191:22-139.178.68.195:54298.service: Deactivated successfully.
Nov 24 07:01:07.721647 systemd[1]: session-16.scope: Deactivated successfully.
Nov 24 07:01:07.724811 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit.
Nov 24 07:01:07.728049 systemd[1]: Started sshd@16-164.90.155.191:22-139.178.68.195:54306.service - OpenSSH per-connection server daemon (139.178.68.195:54306).
Nov 24 07:01:07.732251 systemd-logind[1496]: Removed session 16.
Nov 24 07:01:07.815220 sshd[5072]: Accepted publickey for core from 139.178.68.195 port 54306 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:07.818517 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:07.828799 systemd-logind[1496]: New session 17 of user core.
Nov 24 07:01:07.837007 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 24 07:01:08.216698 sshd[5075]: Connection closed by 139.178.68.195 port 54306
Nov 24 07:01:08.220526 sshd-session[5072]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:08.237477 systemd[1]: Started sshd@17-164.90.155.191:22-139.178.68.195:54316.service - OpenSSH per-connection server daemon (139.178.68.195:54316).
Nov 24 07:01:08.239926 systemd[1]: sshd@16-164.90.155.191:22-139.178.68.195:54306.service: Deactivated successfully.
Nov 24 07:01:08.248619 systemd[1]: session-17.scope: Deactivated successfully.
Nov 24 07:01:08.252599 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit.
Nov 24 07:01:08.259422 systemd-logind[1496]: Removed session 17.
Nov 24 07:01:08.381003 sshd[5082]: Accepted publickey for core from 139.178.68.195 port 54316 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:08.381824 sshd-session[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:08.392812 systemd-logind[1496]: New session 18 of user core.
Nov 24 07:01:08.399223 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 24 07:01:09.282248 sshd[5088]: Connection closed by 139.178.68.195 port 54316
Nov 24 07:01:09.283315 sshd-session[5082]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:09.296352 systemd[1]: sshd@17-164.90.155.191:22-139.178.68.195:54316.service: Deactivated successfully.
Nov 24 07:01:09.302782 systemd[1]: session-18.scope: Deactivated successfully.
Nov 24 07:01:09.305497 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit.
Nov 24 07:01:09.313455 systemd[1]: Started sshd@18-164.90.155.191:22-139.178.68.195:54332.service - OpenSSH per-connection server daemon (139.178.68.195:54332).
Nov 24 07:01:09.315460 systemd-logind[1496]: Removed session 18.
Nov 24 07:01:09.410585 sshd[5102]: Accepted publickey for core from 139.178.68.195 port 54332 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:09.412572 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:09.419497 systemd-logind[1496]: New session 19 of user core.
Nov 24 07:01:09.427286 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 24 07:01:09.880765 sshd[5106]: Connection closed by 139.178.68.195 port 54332
Nov 24 07:01:09.879336 sshd-session[5102]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:09.895091 systemd[1]: Started sshd@19-164.90.155.191:22-139.178.68.195:54344.service - OpenSSH per-connection server daemon (139.178.68.195:54344).
Nov 24 07:01:09.897806 systemd[1]: sshd@18-164.90.155.191:22-139.178.68.195:54332.service: Deactivated successfully.
Nov 24 07:01:09.902323 systemd[1]: session-19.scope: Deactivated successfully.
Nov 24 07:01:09.907210 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit.
Nov 24 07:01:09.913136 systemd-logind[1496]: Removed session 19.
Nov 24 07:01:09.977270 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 54344 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:09.981423 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:09.989978 systemd-logind[1496]: New session 20 of user core.
Nov 24 07:01:09.997241 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 24 07:01:10.213075 sshd[5121]: Connection closed by 139.178.68.195 port 54344
Nov 24 07:01:10.214455 sshd-session[5113]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:10.222269 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit.
Nov 24 07:01:10.223113 systemd[1]: sshd@19-164.90.155.191:22-139.178.68.195:54344.service: Deactivated successfully.
Nov 24 07:01:10.227562 systemd[1]: session-20.scope: Deactivated successfully.
Nov 24 07:01:10.233840 systemd-logind[1496]: Removed session 20.
Nov 24 07:01:10.926447 kubelet[2708]: E1124 07:01:10.926373 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca"
Nov 24 07:01:11.923753 kubelet[2708]: E1124 07:01:11.923465 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae"
Nov 24 07:01:12.922440 kubelet[2708]: E1124 07:01:12.921848 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835"
Nov 24 07:01:13.923607 kubelet[2708]: E1124 07:01:13.923482 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb"
Nov 24 07:01:14.922755 kubelet[2708]: E1124 07:01:14.922609 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:15.229554 systemd[1]: Started sshd@20-164.90.155.191:22-139.178.68.195:36388.service - OpenSSH per-connection server daemon (139.178.68.195:36388).
Nov 24 07:01:15.309370 sshd[5135]: Accepted publickey for core from 139.178.68.195 port 36388 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:15.312493 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:15.324472 systemd-logind[1496]: New session 21 of user core.
Nov 24 07:01:15.331026 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 24 07:01:15.544845 sshd[5138]: Connection closed by 139.178.68.195 port 36388
Nov 24 07:01:15.546448 sshd-session[5135]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:15.553573 systemd[1]: sshd@20-164.90.155.191:22-139.178.68.195:36388.service: Deactivated successfully.
Nov 24 07:01:15.560872 systemd[1]: session-21.scope: Deactivated successfully.
Nov 24 07:01:15.569944 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit.
Nov 24 07:01:15.571488 systemd-logind[1496]: Removed session 21.
Nov 24 07:01:15.931228 kubelet[2708]: E1124 07:01:15.931154 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xx5mj" podUID="65e83e8e-3217-4c5d-9dc8-a1e6cab084a7"
Nov 24 07:01:16.924667 kubelet[2708]: E1124 07:01:16.924567 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f49b47ccf-8dgfn" podUID="43c71191-e8bd-4deb-bacd-b63f93810870"
Nov 24 07:01:17.923360 kubelet[2708]: E1124 07:01:17.923275 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-vgh4k" podUID="688d478f-9397-4780-ac83-825ed42a52b7"
Nov 24 07:01:20.563310 systemd[1]: Started sshd@21-164.90.155.191:22-139.178.68.195:33322.service - OpenSSH per-connection server daemon (139.178.68.195:33322).
Nov 24 07:01:20.658394 sshd[5160]: Accepted publickey for core from 139.178.68.195 port 33322 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:20.661426 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:20.670065 systemd-logind[1496]: New session 22 of user core.
Nov 24 07:01:20.675268 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 24 07:01:20.920891 sshd[5163]: Connection closed by 139.178.68.195 port 33322
Nov 24 07:01:20.918772 sshd-session[5160]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:20.925554 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit.
Nov 24 07:01:20.926405 systemd[1]: sshd@21-164.90.155.191:22-139.178.68.195:33322.service: Deactivated successfully.
Nov 24 07:01:20.933874 systemd[1]: session-22.scope: Deactivated successfully.
Nov 24 07:01:20.942564 systemd-logind[1496]: Removed session 22.
Nov 24 07:01:22.924892 kubelet[2708]: E1124 07:01:22.924788 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jmszh" podUID="31736d8f-1244-4ceb-aaba-f284117475ca"
Nov 24 07:01:23.923644 containerd[1532]: time="2025-11-24T07:01:23.923602151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 24 07:01:24.260775 containerd[1532]: time="2025-11-24T07:01:24.259885774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 24 07:01:24.261525 containerd[1532]: time="2025-11-24T07:01:24.261382359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 24 07:01:24.261525 containerd[1532]: time="2025-11-24T07:01:24.261483749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 24 07:01:24.261733 kubelet[2708]: E1124 07:01:24.261660 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 24 07:01:24.261733 kubelet[2708]: E1124 07:01:24.261724 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 24 07:01:24.262081 kubelet[2708]: E1124 07:01:24.261809 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c87b844f6-pjvfq_calico-apiserver(e6c5825d-6ef5-4e3c-a5de-981230cfd835): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 24 07:01:24.262081 kubelet[2708]: E1124 07:01:24.261849 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c87b844f6-pjvfq" podUID="e6c5825d-6ef5-4e3c-a5de-981230cfd835"
Nov 24 07:01:25.933383 kubelet[2708]: E1124 07:01:25.933335 2708 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 24 07:01:25.942420 systemd[1]: Started sshd@22-164.90.155.191:22-139.178.68.195:33338.service - OpenSSH per-connection server daemon (139.178.68.195:33338).
Nov 24 07:01:26.022204 sshd[5176]: Accepted publickey for core from 139.178.68.195 port 33338 ssh2: RSA SHA256:00gGgJeMUbCrX/yVzeuyiRHqiihdx6flXVUq4OYEHGQ
Nov 24 07:01:26.025756 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 24 07:01:26.036396 systemd-logind[1496]: New session 23 of user core.
Nov 24 07:01:26.042032 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 24 07:01:26.208582 sshd[5179]: Connection closed by 139.178.68.195 port 33338
Nov 24 07:01:26.210949 sshd-session[5176]: pam_unix(sshd:session): session closed for user core
Nov 24 07:01:26.217156 systemd[1]: sshd@22-164.90.155.191:22-139.178.68.195:33338.service: Deactivated successfully.
Nov 24 07:01:26.219845 systemd[1]: session-23.scope: Deactivated successfully.
Nov 24 07:01:26.223028 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit.
Nov 24 07:01:26.227181 systemd-logind[1496]: Removed session 23.
Nov 24 07:01:26.922849 kubelet[2708]: E1124 07:01:26.922203 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-79c5bccf9c-g8247" podUID="7e8ad9f8-a5c0-4992-b252-00ca50c053ae"
Nov 24 07:01:26.925461 containerd[1532]: time="2025-11-24T07:01:26.924987662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 24 07:01:27.280005 containerd[1532]: time="2025-11-24T07:01:27.279799238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 24 07:01:27.280890 containerd[1532]: time="2025-11-24T07:01:27.280825176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 24 07:01:27.281101 containerd[1532]: time="2025-11-24T07:01:27.280856765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 24 07:01:27.281790 kubelet[2708]: E1124 07:01:27.281677 2708 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 24 07:01:27.282805 kubelet[2708]: E1124 07:01:27.281804 2708 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 24 07:01:27.282805 kubelet[2708]: E1124 07:01:27.282372 2708 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-589b8dfb96-w2xrv_calico-system(9a41cd22-6538-4edb-a2fc-fe53fb988efb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 24 07:01:27.282805 kubelet[2708]: E1124 07:01:27.282555 2708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-589b8dfb96-w2xrv" podUID="9a41cd22-6538-4edb-a2fc-fe53fb988efb"