Apr 21 10:16:41.001127 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:16:41.001149 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:41.001158 kernel: BIOS-provided physical RAM map:
Apr 21 10:16:41.001164 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 21 10:16:41.001170 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 21 10:16:41.001178 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:16:41.001185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 21 10:16:41.001191 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 21 10:16:41.001197 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:16:41.001203 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:16:41.001209 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:16:41.001215 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:16:41.001221 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 21 10:16:41.001230 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:16:41.001237 kernel: NX (Execute Disable) protection: active
Apr 21 10:16:41.001244 kernel: APIC: Static calls initialized
Apr 21 10:16:41.001250 kernel: SMBIOS 2.8 present.
Apr 21 10:16:41.001257 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 21 10:16:41.001263 kernel: Hypervisor detected: KVM
Apr 21 10:16:41.001272 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:16:41.001278 kernel: kvm-clock: using sched offset of 5618586643 cycles
Apr 21 10:16:41.001284 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:16:41.001291 kernel: tsc: Detected 1999.998 MHz processor
Apr 21 10:16:41.001298 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:16:41.001305 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:16:41.001311 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 21 10:16:41.001318 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:16:41.001324 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:16:41.001333 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 21 10:16:41.001340 kernel: Using GB pages for direct mapping
Apr 21 10:16:41.001346 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:16:41.001353 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 21 10:16:41.001359 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001365 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001372 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001378 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 21 10:16:41.001385 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001394 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001400 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001407 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001417 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 21 10:16:41.001424 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 21 10:16:41.001430 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 21 10:16:41.001440 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 21 10:16:41.001447 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 21 10:16:41.001453 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 21 10:16:41.001460 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 21 10:16:41.001467 kernel: No NUMA configuration found
Apr 21 10:16:41.001474 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 21 10:16:41.002506 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Apr 21 10:16:41.002524 kernel: Zone ranges:
Apr 21 10:16:41.002536 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:16:41.002544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:16:41.002550 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:16:41.002557 kernel: Movable zone start for each node
Apr 21 10:16:41.002564 kernel: Early memory node ranges
Apr 21 10:16:41.002582 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:16:41.002589 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 21 10:16:41.002596 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:16:41.002603 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 21 10:16:41.002610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:16:41.002620 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:16:41.002627 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 21 10:16:41.002634 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:16:41.002640 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:16:41.002647 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:16:41.002654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:16:41.002661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:16:41.002667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:16:41.002674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:16:41.002684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:16:41.002691 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:16:41.002698 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:16:41.002705 kernel: TSC deadline timer available
Apr 21 10:16:41.002712 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:16:41.002718 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:16:41.002725 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:16:41.002732 kernel: kvm-guest: setup PV sched yield
Apr 21 10:16:41.002739 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:16:41.002748 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:16:41.002755 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:16:41.002762 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:16:41.002769 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:16:41.002776 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:16:41.002782 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:16:41.002789 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:16:41.002796 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:16:41.002803 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:41.002813 kernel: random: crng init done
Apr 21 10:16:41.002820 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:16:41.002826 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:16:41.002833 kernel: Fallback order for Node 0: 0
Apr 21 10:16:41.002840 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 21 10:16:41.002847 kernel: Policy zone: Normal
Apr 21 10:16:41.002853 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:16:41.002860 kernel: software IO TLB: area num 2.
Apr 21 10:16:41.002869 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227300K reserved, 0K cma-reserved)
Apr 21 10:16:41.002876 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:16:41.002883 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:16:41.002890 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:16:41.002897 kernel: Dynamic Preempt: voluntary
Apr 21 10:16:41.002903 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:16:41.002911 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:16:41.002918 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:16:41.002925 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:16:41.002934 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:16:41.002941 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:16:41.002948 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:16:41.002955 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:16:41.002962 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:16:41.002968 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:16:41.002975 kernel: Console: colour VGA+ 80x25
Apr 21 10:16:41.002982 kernel: printk: console [tty0] enabled
Apr 21 10:16:41.002989 kernel: printk: console [ttyS0] enabled
Apr 21 10:16:41.002995 kernel: ACPI: Core revision 20230628
Apr 21 10:16:41.003005 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:16:41.003012 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:16:41.003018 kernel: x2apic enabled
Apr 21 10:16:41.003033 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:16:41.003043 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:16:41.003050 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:16:41.003057 kernel: kvm-guest: setup PV IPIs
Apr 21 10:16:41.003064 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:16:41.003072 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:16:41.003079 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Apr 21 10:16:41.003086 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:16:41.003096 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:16:41.003103 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:16:41.003110 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:16:41.003117 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:16:41.003125 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:16:41.003134 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 21 10:16:41.003142 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:16:41.003149 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:16:41.003156 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 21 10:16:41.003164 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 21 10:16:41.003171 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:16:41.003179 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 21 10:16:41.003186 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:16:41.003195 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:16:41.003203 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:16:41.003210 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:16:41.003217 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:16:41.003224 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:16:41.003231 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:16:41.003238 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 21 10:16:41.003246 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 21 10:16:41.003253 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:16:41.003263 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:16:41.003270 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:16:41.003277 kernel: landlock: Up and running.
Apr 21 10:16:41.003284 kernel: SELinux: Initializing.
Apr 21 10:16:41.003291 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.003298 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.003305 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 21 10:16:41.003313 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:41.003320 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:41.003330 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:41.003337 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:16:41.003344 kernel: ... version:                0
Apr 21 10:16:41.003351 kernel: ... bit width:              48
Apr 21 10:16:41.003358 kernel: ... generic registers:      6
Apr 21 10:16:41.003365 kernel: ... value mask:             0000ffffffffffff
Apr 21 10:16:41.003372 kernel: ... max period:             00007fffffffffff
Apr 21 10:16:41.003379 kernel: ... fixed-purpose events:   0
Apr 21 10:16:41.003386 kernel: ... event mask:             000000000000003f
Apr 21 10:16:41.003396 kernel: signal: max sigframe size: 3376
Apr 21 10:16:41.003403 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:16:41.003411 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:16:41.003418 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:16:41.003425 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:16:41.003432 kernel: .... node #0, CPUs:      #1
Apr 21 10:16:41.003439 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:16:41.003446 kernel: smpboot: Max logical packages: 1
Apr 21 10:16:41.003453 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 21 10:16:41.003462 kernel: devtmpfs: initialized
Apr 21 10:16:41.003470 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:16:41.003515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:16:41.003523 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:16:41.003530 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:16:41.003539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:16:41.003549 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:16:41.003557 kernel: audit: type=2000 audit(1776766599.909:1): state=initialized audit_enabled=0 res=1
Apr 21 10:16:41.003564 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:16:41.003574 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:16:41.003581 kernel: cpuidle: using governor menu
Apr 21 10:16:41.003588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:16:41.003596 kernel: dca service started, version 1.12.1
Apr 21 10:16:41.003603 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:16:41.003610 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:16:41.003617 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:16:41.003624 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:16:41.003632 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:16:41.003641 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:16:41.003648 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:16:41.003655 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:16:41.003662 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:16:41.003670 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:16:41.003677 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:16:41.003684 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:16:41.003691 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:16:41.003698 kernel: ACPI: Interpreter enabled
Apr 21 10:16:41.003708 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:16:41.003715 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:16:41.003722 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:16:41.003730 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:16:41.003737 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:16:41.003744 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:16:41.003935 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:16:41.004161 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:16:41.004309 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:16:41.004319 kernel: PCI host bridge to bus 0000:00
Apr 21 10:16:41.004462 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:16:41.004606 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:16:41.004730 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:16:41.004853 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 21 10:16:41.004979 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:16:41.005108 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 21 10:16:41.005238 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:16:41.005397 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:16:41.005558 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:16:41.005693 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:16:41.005824 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:16:41.005970 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:16:41.006102 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:16:41.006243 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 21 10:16:41.006376 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 21 10:16:41.006538 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:16:41.006676 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:16:41.006818 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:16:41.006955 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 21 10:16:41.007086 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:16:41.007217 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:16:41.007347 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:16:41.008594 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:16:41.008732 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:16:41.008869 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:16:41.009002 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 21 10:16:41.009127 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:16:41.009260 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:16:41.009386 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:16:41.009396 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:16:41.009404 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:16:41.009411 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:16:41.009422 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:16:41.009430 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:16:41.009437 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:16:41.009444 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:16:41.009451 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:16:41.009458 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:16:41.009466 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:16:41.009473 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:16:41.010565 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:16:41.010583 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:16:41.010591 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:16:41.010598 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:16:41.010606 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:16:41.010613 kernel: iommu: Default domain type: Translated
Apr 21 10:16:41.010620 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:16:41.010627 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:16:41.010634 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:16:41.010641 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 21 10:16:41.010651 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 21 10:16:41.010818 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:16:41.010949 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:16:41.011072 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:16:41.011082 kernel: vgaarb: loaded
Apr 21 10:16:41.011090 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:16:41.011098 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:16:41.011105 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:16:41.011117 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:16:41.011124 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:16:41.011131 kernel: pnp: PnP ACPI init
Apr 21 10:16:41.011272 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:16:41.011282 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:16:41.011290 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:16:41.011297 kernel: NET: Registered PF_INET protocol family
Apr 21 10:16:41.011305 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:16:41.011312 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:16:41.011323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:16:41.011330 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:16:41.011338 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:16:41.011345 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:16:41.011352 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.011359 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.011367 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:16:41.011374 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:16:41.011539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:16:41.011661 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:16:41.011818 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:16:41.011937 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 21 10:16:41.012057 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:16:41.012171 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 21 10:16:41.012180 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:16:41.012188 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 21 10:16:41.012195 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 21 10:16:41.012208 kernel: Initialise system trusted keyrings
Apr 21 10:16:41.012215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:16:41.012222 kernel: Key type asymmetric registered
Apr 21 10:16:41.012229 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:16:41.012237 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:16:41.012244 kernel: io scheduler mq-deadline registered
Apr 21 10:16:41.012251 kernel: io scheduler kyber registered
Apr 21 10:16:41.012258 kernel: io scheduler bfq registered
Apr 21 10:16:41.012265 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:16:41.012276 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:16:41.012283 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:16:41.012290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:16:41.012298 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:16:41.012305 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:16:41.012313 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:16:41.012320 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:16:41.012451 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 21 10:16:41.012465 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:16:41.014616 kernel: rtc_cmos 00:03: registered as rtc0
Apr 21 10:16:41.014778 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:16:40 UTC (1776766600)
Apr 21 10:16:41.014900 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:16:41.014910 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 21 10:16:41.014917 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:16:41.014924 kernel: Segment Routing with IPv6
Apr 21 10:16:41.014931 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:16:41.014938 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:16:41.014950 kernel: Key type dns_resolver registered
Apr 21 10:16:41.014957 kernel: IPI shorthand broadcast: enabled
Apr 21 10:16:41.014965 kernel: sched_clock: Marking stable (883004469, 328447229)->(1338569138, -127117440)
Apr 21 10:16:41.014972 kernel: registered taskstats version 1
Apr 21 10:16:41.014979 kernel: Loading compiled-in X.509 certificates
Apr 21 10:16:41.014986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:16:41.014993 kernel: Key type .fscrypt registered
Apr 21 10:16:41.015000 kernel: Key type fscrypt-provisioning registered
Apr 21 10:16:41.015007 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:16:41.015017 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:16:41.015024 kernel: ima: No architecture policies found
Apr 21 10:16:41.015031 kernel: clk: Disabling unused clocks
Apr 21 10:16:41.015041 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:16:41.015053 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:16:41.015062 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:16:41.015069 kernel: Run /init as init process
Apr 21 10:16:41.015076 kernel:   with arguments:
Apr 21 10:16:41.015086 kernel:     /init
Apr 21 10:16:41.015093 kernel:   with environment:
Apr 21 10:16:41.015100 kernel:     HOME=/
Apr 21 10:16:41.015107 kernel:     TERM=linux
Apr 21 10:16:41.015116 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:16:41.015125 systemd[1]: Detected virtualization kvm.
Apr 21 10:16:41.015132 systemd[1]: Detected architecture x86-64.
Apr 21 10:16:41.015140 systemd[1]: Running in initrd.
Apr 21 10:16:41.015149 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:16:41.015157 systemd[1]: Hostname set to .
Apr 21 10:16:41.015164 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:16:41.015172 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:16:41.015179 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:41.015201 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:41.015215 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:16:41.015222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:16:41.015230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:16:41.015238 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:16:41.015247 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:16:41.015255 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:16:41.015263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:41.015273 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:41.015280 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:16:41.015288 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:16:41.015296 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:16:41.015304 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:16:41.015311 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:16:41.015319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:16:41.015327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:16:41.015337 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:16:41.015345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:16:41.015352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:16:41.015360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:16:41.015367 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:16:41.015375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:16:41.015383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:16:41.015390 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:16:41.015398 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:16:41.015408 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:16:41.015416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:16:41.015423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:16:41.015451 systemd-journald[178]: Collecting audit messages is disabled. Apr 21 10:16:41.015471 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:16:41.016169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:16:41.016179 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:16:41.016192 systemd-journald[178]: Journal started Apr 21 10:16:41.016208 systemd-journald[178]: Runtime Journal (/run/log/journal/98bc23b5e7bd4259a5123da0b002a15c) is 8.0M, max 78.3M, 70.3M free. Apr 21 10:16:41.003006 systemd-modules-load[179]: Inserted module 'overlay' Apr 21 10:16:41.108576 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 21 10:16:41.108614 kernel: Bridge firewalling registered Apr 21 10:16:41.034981 systemd-modules-load[179]: Inserted module 'br_netfilter' Apr 21 10:16:41.112805 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:16:41.113910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:16:41.114931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:16:41.122607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:16:41.125161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:16:41.127614 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:16:41.138781 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:16:41.142416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:16:41.157969 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:16:41.171163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:16:41.172390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:16:41.182728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:16:41.185974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:16:41.188614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:16:41.202532 dracut-cmdline[208]: dracut-dracut-053 Apr 21 10:16:41.210215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 21 10:16:41.213086 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:16:41.237629 systemd-resolved[210]: Positive Trust Anchors: Apr 21 10:16:41.237643 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:16:41.237672 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:16:41.244880 systemd-resolved[210]: Defaulting to hostname 'linux'. Apr 21 10:16:41.246092 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:16:41.247377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:16:41.292521 kernel: SCSI subsystem initialized Apr 21 10:16:41.302502 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:16:41.313526 kernel: iscsi: registered transport (tcp) Apr 21 10:16:41.334679 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:16:41.334775 kernel: QLogic iSCSI HBA Driver Apr 21 10:16:41.378477 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 21 10:16:41.387663 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:16:41.412618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 21 10:16:41.412666 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:16:41.414801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:16:41.457526 kernel: raid6: avx2x4 gen() 34435 MB/s Apr 21 10:16:41.475508 kernel: raid6: avx2x2 gen() 30654 MB/s Apr 21 10:16:41.493804 kernel: raid6: avx2x1 gen() 24921 MB/s Apr 21 10:16:41.493839 kernel: raid6: using algorithm avx2x4 gen() 34435 MB/s Apr 21 10:16:41.516539 kernel: raid6: .... xor() 4316 MB/s, rmw enabled Apr 21 10:16:41.516567 kernel: raid6: using avx2x2 recovery algorithm Apr 21 10:16:41.537519 kernel: xor: automatically using best checksumming function avx Apr 21 10:16:41.676527 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:16:41.689162 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:16:41.697646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:16:41.711700 systemd-udevd[396]: Using default interface naming scheme 'v255'. Apr 21 10:16:41.716419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:16:41.723671 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:16:41.737392 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 21 10:16:41.768206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:16:41.775620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:16:41.846945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:16:41.855642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:16:41.871718 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:16:41.875007 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:16:41.876685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:16:41.879066 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:16:41.885673 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:16:41.903169 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:16:41.927553 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:16:41.938287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:16:41.938365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:16:42.124920 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:16:42.126181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:16:42.126271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:16:42.127049 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:16:42.174445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:16:42.188936 kernel: scsi host0: Virtio SCSI HBA Apr 21 10:16:42.191974 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 10:16:42.192016 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:16:42.192028 kernel: AES CTR mode by8 optimization enabled Apr 21 10:16:42.205452 kernel: libata version 3.00 loaded. 
Apr 21 10:16:42.223594 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:16:42.223840 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:16:42.226382 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:16:42.226602 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:16:42.235514 kernel: scsi host1: ahci Apr 21 10:16:42.236537 kernel: scsi host2: ahci Apr 21 10:16:42.239575 kernel: scsi host3: ahci Apr 21 10:16:42.239806 kernel: scsi host4: ahci Apr 21 10:16:42.242584 kernel: scsi host5: ahci Apr 21 10:16:42.244503 kernel: scsi host6: ahci Apr 21 10:16:42.244729 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 Apr 21 10:16:42.244744 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 Apr 21 10:16:42.244754 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 Apr 21 10:16:42.244764 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 Apr 21 10:16:42.244783 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 Apr 21 10:16:42.244793 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 Apr 21 10:16:42.249652 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 21 10:16:42.250384 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 21 10:16:42.250777 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:16:42.251017 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 21 10:16:42.252606 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 10:16:42.252772 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:16:42.252793 kernel: GPT:9289727 != 167739391 Apr 21 10:16:42.252803 kernel: GPT:Alternate GPT header not at the end of the disk. 
Apr 21 10:16:42.252813 kernel: GPT:9289727 != 167739391 Apr 21 10:16:42.252822 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:16:42.252832 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:16:42.252842 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:16:42.380447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:16:42.387761 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:16:42.424114 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:16:42.553534 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 21 10:16:42.553634 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:16:42.563361 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 21 10:16:42.563511 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:16:42.566518 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:16:42.566575 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:16:42.609510 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (452) Apr 21 10:16:42.619502 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (441) Apr 21 10:16:42.623135 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 10:16:42.629525 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 10:16:42.635417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:16:42.640775 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 10:16:42.641704 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Apr 21 10:16:42.648938 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:16:42.658512 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:16:42.658693 disk-uuid[566]: Primary Header is updated. Apr 21 10:16:42.658693 disk-uuid[566]: Secondary Entries is updated. Apr 21 10:16:42.658693 disk-uuid[566]: Secondary Header is updated. Apr 21 10:16:43.676533 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:16:43.678331 disk-uuid[567]: The operation has completed successfully. Apr 21 10:16:43.740935 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:16:43.741101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:16:43.744782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:16:43.752663 sh[584]: Success Apr 21 10:16:43.769533 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:16:43.825250 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:16:43.834629 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:16:43.836560 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:16:43.867033 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:16:43.867082 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:16:43.869528 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:16:43.873643 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:16:43.878360 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:16:43.887532 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:16:43.890379 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Apr 21 10:16:43.892230 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:16:43.902771 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:16:43.905640 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:16:43.928631 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:16:43.928700 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:16:43.928721 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:16:43.936829 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:16:43.936865 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:16:43.953825 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:16:43.953538 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:16:43.961846 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:16:43.967674 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 10:16:44.052027 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:16:44.059648 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:16:44.059990 ignition[697]: Ignition 2.19.0 Apr 21 10:16:44.060002 ignition[697]: Stage: fetch-offline Apr 21 10:16:44.060072 ignition[697]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:16:44.060101 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:16:44.066849 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 21 10:16:44.060248 ignition[697]: parsed url from cmdline: "" Apr 21 10:16:44.060253 ignition[697]: no config URL provided Apr 21 10:16:44.060259 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:16:44.060270 ignition[697]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:16:44.060278 ignition[697]: failed to fetch config: resource requires networking Apr 21 10:16:44.061907 ignition[697]: Ignition finished successfully Apr 21 10:16:44.096435 systemd-networkd[769]: lo: Link UP Apr 21 10:16:44.096451 systemd-networkd[769]: lo: Gained carrier Apr 21 10:16:44.098312 systemd-networkd[769]: Enumeration completed Apr 21 10:16:44.098442 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:16:44.099275 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:16:44.099280 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:16:44.100069 systemd[1]: Reached target network.target - Network. Apr 21 10:16:44.101454 systemd-networkd[769]: eth0: Link UP Apr 21 10:16:44.101460 systemd-networkd[769]: eth0: Gained carrier Apr 21 10:16:44.101468 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:16:44.111745 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 21 10:16:44.128627 ignition[773]: Ignition 2.19.0 Apr 21 10:16:44.128642 ignition[773]: Stage: fetch Apr 21 10:16:44.128814 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:16:44.128827 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:16:44.128916 ignition[773]: parsed url from cmdline: "" Apr 21 10:16:44.128921 ignition[773]: no config URL provided Apr 21 10:16:44.128926 ignition[773]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:16:44.128936 ignition[773]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:16:44.128958 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 21 10:16:44.129150 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 21 10:16:44.329250 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 21 10:16:44.329459 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 21 10:16:44.730034 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 21 10:16:44.730192 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 21 10:16:44.884600 systemd-networkd[769]: eth0: DHCPv4 address 172.236.109.217/24, gateway 172.236.109.1 acquired from 23.205.167.221 Apr 21 10:16:45.530380 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 21 10:16:45.626504 ignition[773]: PUT result: OK Apr 21 10:16:45.626587 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 21 10:16:45.737017 ignition[773]: GET result: OK Apr 21 10:16:45.737187 ignition[773]: parsing config with SHA512: 412b4a153299512a51d2e7cf9b912a12482b5054bdc45155fc8d8749db028ffa74c205c1eaccf7a0c60fae82d020295e731c965a6a82b2136cdd523731644d8b Apr 21 10:16:45.743103 unknown[773]: fetched base config from "system" Apr 21 10:16:45.743123 
unknown[773]: fetched base config from "system" Apr 21 10:16:45.744903 ignition[773]: fetch: fetch complete Apr 21 10:16:45.743139 unknown[773]: fetched user config from "akamai" Apr 21 10:16:45.744928 ignition[773]: fetch: fetch passed Apr 21 10:16:45.745006 ignition[773]: Ignition finished successfully Apr 21 10:16:45.748529 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 10:16:45.760645 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:16:45.777551 ignition[781]: Ignition 2.19.0 Apr 21 10:16:45.777570 ignition[781]: Stage: kargs Apr 21 10:16:45.777787 ignition[781]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:16:45.777804 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:16:45.779413 ignition[781]: kargs: kargs passed Apr 21 10:16:45.781539 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:16:45.779511 ignition[781]: Ignition finished successfully Apr 21 10:16:45.789636 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 10:16:45.804164 ignition[787]: Ignition 2.19.0 Apr 21 10:16:45.804181 ignition[787]: Stage: disks Apr 21 10:16:45.804416 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:16:45.808283 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:16:45.804432 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:16:45.831097 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 21 10:16:45.806116 ignition[787]: disks: disks passed Apr 21 10:16:45.832384 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:16:45.806196 ignition[787]: Ignition finished successfully Apr 21 10:16:45.834179 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:16:45.835916 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 21 10:16:45.837409 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:16:45.845707 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:16:45.863365 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 21 10:16:45.866999 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:16:45.873602 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:16:45.976559 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:16:45.977664 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:16:45.979144 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:16:45.986612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:16:45.990370 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:16:45.991967 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 21 10:16:45.992143 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:16:45.992324 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:16:46.004516 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803) Apr 21 10:16:46.008614 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Apr 21 10:16:46.015414 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:16:46.015456 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:16:46.015475 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:16:46.023513 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:16:46.023558 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:16:46.026665 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 21 10:16:46.030265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:16:46.078868 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:16:46.083386 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:16:46.089469 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:16:46.093942 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:16:46.126984 systemd-networkd[769]: eth0: Gained IPv6LL Apr 21 10:16:46.193340 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:16:46.199570 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:16:46.202301 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 21 10:16:46.209329 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 21 10:16:46.214186 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:16:46.239014 ignition[921]: INFO : Ignition 2.19.0 Apr 21 10:16:46.241576 ignition[921]: INFO : Stage: mount Apr 21 10:16:46.241576 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:16:46.241576 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:16:46.241576 ignition[921]: INFO : mount: mount passed Apr 21 10:16:46.241576 ignition[921]: INFO : Ignition finished successfully Apr 21 10:16:46.242908 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:16:46.244192 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:16:46.251601 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:16:46.982626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:16:47.000516 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (934) Apr 21 10:16:47.007842 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:16:47.007874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:16:47.007892 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:16:47.017310 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:16:47.017344 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:16:47.020543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:16:47.055372 ignition[951]: INFO : Ignition 2.19.0 Apr 21 10:16:47.055372 ignition[951]: INFO : Stage: files Apr 21 10:16:47.055372 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:16:47.058167 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 21 10:16:47.058167 ignition[951]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:16:47.062269 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:16:47.062269 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:16:47.065082 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:16:47.066346 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:16:47.066346 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:16:47.066214 unknown[951]: wrote ssh authorized keys file for user: core Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 10:16:47.356778 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 21 10:16:47.476926 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 
10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:16:47.785708 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:16:48.032655 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:48.032655 ignition[951]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: files passed
Apr 21 10:16:48.058006 ignition[951]: INFO : Ignition finished successfully
Apr 21 10:16:48.041997 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:16:48.066668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:16:48.071655 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:16:48.073236 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:16:48.073358 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:16:48.092832 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:48.094573 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:48.094573 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:48.096206 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:16:48.097714 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:16:48.103730 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:16:48.129985 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:16:48.130119 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:16:48.132346 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:16:48.133537 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:16:48.135295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:16:48.140729 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:16:48.155433 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:16:48.163643 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:16:48.173869 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:16:48.174846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:16:48.176600 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:16:48.178143 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:16:48.178294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:16:48.180308 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:16:48.181380 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:16:48.182972 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:16:48.184425 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:16:48.185877 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:16:48.187501 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:16:48.189108 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:16:48.190816 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:16:48.192347 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:16:48.194020 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:16:48.195543 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:16:48.195669 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:16:48.197401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:48.198456 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:48.199920 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:16:48.200030 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:48.201566 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:16:48.201666 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:16:48.203813 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:16:48.203923 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:16:48.204930 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:16:48.205028 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:16:48.216643 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:16:48.217679 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:16:48.217795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:16:48.222700 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:16:48.223663 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:16:48.223780 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:16:48.227132 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:16:48.227232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:16:48.235841 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:16:48.236530 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:16:48.240533 ignition[1004]: INFO : Ignition 2.19.0
Apr 21 10:16:48.240533 ignition[1004]: INFO : Stage: umount
Apr 21 10:16:48.240533 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:48.240533 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:48.246537 ignition[1004]: INFO : umount: umount passed
Apr 21 10:16:48.246537 ignition[1004]: INFO : Ignition finished successfully
Apr 21 10:16:48.247991 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:16:48.248118 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:16:48.275123 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:16:48.275556 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:16:48.275608 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:16:48.277928 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:16:48.277980 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:16:48.279380 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:16:48.279442 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:16:48.280858 systemd[1]: Stopped target network.target - Network.
Apr 21 10:16:48.282384 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:16:48.282441 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:16:48.287770 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:16:48.289155 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:16:48.290557 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:48.291466 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:16:48.293124 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:16:48.294839 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:16:48.294889 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:16:48.297009 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:16:48.297053 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:16:48.298626 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:16:48.298685 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:16:48.301241 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:16:48.301310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:16:48.305436 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:16:48.307697 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:16:48.309437 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:16:48.309623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:16:48.309662 systemd-networkd[769]: eth0: DHCPv6 lease lost
Apr 21 10:16:48.311840 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:16:48.311966 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:16:48.317209 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:16:48.317282 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:16:48.320754 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:16:48.320809 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:16:48.332668 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:16:48.333407 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:16:48.333504 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:16:48.335308 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:16:48.339958 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:16:48.340089 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:16:48.349820 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:16:48.350023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:16:48.358792 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:16:48.358872 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:16:48.360575 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:16:48.360618 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:16:48.362332 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:16:48.362387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:16:48.365051 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:16:48.365120 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:16:48.366779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:16:48.366851 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:48.375733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:16:48.377521 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:16:48.378366 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:16:48.379189 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:16:48.379245 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:16:48.381719 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:16:48.381774 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:16:48.382742 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:16:48.382794 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:16:48.384534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:16:48.384592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:48.386727 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:16:48.386843 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:16:48.388326 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:16:48.388430 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:16:48.390373 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:16:48.398660 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:16:48.407644 systemd[1]: Switching root.
Apr 21 10:16:48.433958 systemd-journald[178]: Journal stopped
Apr 21 10:16:41.001127 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:16:41.001149 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:41.001158 kernel: BIOS-provided physical RAM map:
Apr 21 10:16:41.001164 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 21 10:16:41.001170 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 21 10:16:41.001178 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 10:16:41.001185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 21 10:16:41.001191 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 21 10:16:41.001197 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 10:16:41.001203 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 10:16:41.001209 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:16:41.001215 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 10:16:41.001221 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 21 10:16:41.001230 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:16:41.001237 kernel: NX (Execute Disable) protection: active
Apr 21 10:16:41.001244 kernel: APIC: Static calls initialized
Apr 21 10:16:41.001250 kernel: SMBIOS 2.8 present.
Apr 21 10:16:41.001257 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 21 10:16:41.001263 kernel: Hypervisor detected: KVM
Apr 21 10:16:41.001272 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:16:41.001278 kernel: kvm-clock: using sched offset of 5618586643 cycles
Apr 21 10:16:41.001284 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:16:41.001291 kernel: tsc: Detected 1999.998 MHz processor
Apr 21 10:16:41.001298 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:16:41.001305 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:16:41.001311 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 21 10:16:41.001318 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 10:16:41.001324 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:16:41.001333 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 21 10:16:41.001340 kernel: Using GB pages for direct mapping
Apr 21 10:16:41.001346 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:16:41.001353 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 21 10:16:41.001359 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001365 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001372 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001378 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 21 10:16:41.001385 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001394 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001400 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001407 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:16:41.001417 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 21 10:16:41.001424 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 21 10:16:41.001430 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 21 10:16:41.001440 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 21 10:16:41.001447 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 21 10:16:41.001453 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 21 10:16:41.001460 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 21 10:16:41.001467 kernel: No NUMA configuration found
Apr 21 10:16:41.001474 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 21 10:16:41.002506 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Apr 21 10:16:41.002524 kernel: Zone ranges:
Apr 21 10:16:41.002536 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:16:41.002544 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:16:41.002550 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:16:41.002557 kernel: Movable zone start for each node
Apr 21 10:16:41.002564 kernel: Early memory node ranges
Apr 21 10:16:41.002582 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 10:16:41.002589 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 21 10:16:41.002596 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 21 10:16:41.002603 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 21 10:16:41.002610 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:16:41.002620 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 10:16:41.002627 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 21 10:16:41.002634 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:16:41.002640 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:16:41.002647 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:16:41.002654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:16:41.002661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:16:41.002667 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:16:41.002674 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:16:41.002684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:16:41.002691 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:16:41.002698 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:16:41.002705 kernel: TSC deadline timer available
Apr 21 10:16:41.002712 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:16:41.002718 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:16:41.002725 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 10:16:41.002732 kernel: kvm-guest: setup PV sched yield
Apr 21 10:16:41.002739 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 10:16:41.002748 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:16:41.002755 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:16:41.002762 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:16:41.002769 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:16:41.002776 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:16:41.002782 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:16:41.002789 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:16:41.002796 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:16:41.002803 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:41.002813 kernel: random: crng init done
Apr 21 10:16:41.002820 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:16:41.002826 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:16:41.002833 kernel: Fallback order for Node 0: 0
Apr 21 10:16:41.002840 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Apr 21 10:16:41.002847 kernel: Policy zone: Normal
Apr 21 10:16:41.002853 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:16:41.002860 kernel: software IO TLB: area num 2.
Apr 21 10:16:41.002869 kernel: Memory: 3966212K/4193772K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 227300K reserved, 0K cma-reserved)
Apr 21 10:16:41.002876 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:16:41.002883 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:16:41.002890 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:16:41.002897 kernel: Dynamic Preempt: voluntary
Apr 21 10:16:41.002903 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:16:41.002911 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:16:41.002918 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:16:41.002925 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:16:41.002934 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:16:41.002941 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:16:41.002948 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:16:41.002955 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:16:41.002962 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:16:41.002968 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:16:41.002975 kernel: Console: colour VGA+ 80x25
Apr 21 10:16:41.002982 kernel: printk: console [tty0] enabled
Apr 21 10:16:41.002989 kernel: printk: console [ttyS0] enabled
Apr 21 10:16:41.002995 kernel: ACPI: Core revision 20230628
Apr 21 10:16:41.003005 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:16:41.003012 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:16:41.003018 kernel: x2apic enabled
Apr 21 10:16:41.003033 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:16:41.003043 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 10:16:41.003050 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 10:16:41.003057 kernel: kvm-guest: setup PV IPIs
Apr 21 10:16:41.003064 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:16:41.003072 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:16:41.003079 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Apr 21 10:16:41.003086 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:16:41.003096 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:16:41.003103 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:16:41.003110 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:16:41.003117 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:16:41.003125 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:16:41.003134 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 21 10:16:41.003142 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:16:41.003149 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:16:41.003156 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 21 10:16:41.003164 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 21 10:16:41.003171 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:16:41.003179 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 21 10:16:41.003186 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:16:41.003195 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:16:41.003203 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:16:41.003210 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:16:41.003217 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:16:41.003224 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:16:41.003231 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:16:41.003238 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 21 10:16:41.003246 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 21 10:16:41.003253 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:16:41.003263 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:16:41.003270 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:16:41.003277 kernel: landlock: Up and running.
Apr 21 10:16:41.003284 kernel: SELinux: Initializing.
Apr 21 10:16:41.003291 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.003298 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.003305 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 21 10:16:41.003313 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:41.003320 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:41.003330 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:41.003337 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:16:41.003344 kernel: ... version: 0
Apr 21 10:16:41.003351 kernel: ... bit width: 48
Apr 21 10:16:41.003358 kernel: ... generic registers: 6
Apr 21 10:16:41.003365 kernel: ... value mask: 0000ffffffffffff
Apr 21 10:16:41.003372 kernel: ... max period: 00007fffffffffff
Apr 21 10:16:41.003379 kernel: ... fixed-purpose events: 0
Apr 21 10:16:41.003386 kernel: ... event mask: 000000000000003f
Apr 21 10:16:41.003396 kernel: signal: max sigframe size: 3376
Apr 21 10:16:41.003403 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:16:41.003411 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:16:41.003418 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:16:41.003425 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:16:41.003432 kernel: .... node #0, CPUs: #1
Apr 21 10:16:41.003439 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:16:41.003446 kernel: smpboot: Max logical packages: 1
Apr 21 10:16:41.003453 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 21 10:16:41.003462 kernel: devtmpfs: initialized
Apr 21 10:16:41.003470 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:16:41.003515 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:16:41.003523 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:16:41.003530 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:16:41.003539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:16:41.003549 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:16:41.003557 kernel: audit: type=2000 audit(1776766599.909:1): state=initialized audit_enabled=0 res=1
Apr 21 10:16:41.003564 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:16:41.003574 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:16:41.003581 kernel: cpuidle: using governor menu
Apr 21 10:16:41.003588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:16:41.003596 kernel: dca service started, version 1.12.1
Apr 21 10:16:41.003603 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 21 10:16:41.003610 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 10:16:41.003617 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:16:41.003624 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:16:41.003632 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:16:41.003641 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:16:41.003648 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:16:41.003655 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:16:41.003662 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:16:41.003670 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:16:41.003677 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:16:41.003684 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:16:41.003691 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:16:41.003698 kernel: ACPI: Interpreter enabled
Apr 21 10:16:41.003708 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 10:16:41.003715 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:16:41.003722 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:16:41.003730 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:16:41.003737 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:16:41.003744 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:16:41.003935 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:16:41.004161 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:16:41.004309 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:16:41.004319 kernel: PCI host bridge to bus 0000:00
Apr 21 10:16:41.004462 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:16:41.004606 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:16:41.004730 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:16:41.004853 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 21 10:16:41.004979 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 10:16:41.005108 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 21 10:16:41.005238 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:16:41.005397 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:16:41.005558 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 21 10:16:41.005693 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 21 10:16:41.005824 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 21 10:16:41.005970 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 21 10:16:41.006102 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:16:41.006243 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Apr 21 10:16:41.006376 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Apr 21 10:16:41.006538 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 21 10:16:41.006676 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 10:16:41.006818 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 21 10:16:41.006955 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 21 10:16:41.007086 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 21 10:16:41.007217 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 10:16:41.007347 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 21 10:16:41.008594 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:16:41.008732 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:16:41.008869 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:16:41.009002 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Apr 21 10:16:41.009127 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Apr 21 10:16:41.009260 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:16:41.009386 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 21 10:16:41.009396 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:16:41.009404 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:16:41.009411 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:16:41.009422 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:16:41.009430 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:16:41.009437 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:16:41.009444 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:16:41.009451 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:16:41.009458 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:16:41.009466 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:16:41.009473 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:16:41.010565 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:16:41.010583 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:16:41.010591 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:16:41.010598 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:16:41.010606 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:16:41.010613 kernel: iommu: Default domain type: Translated
Apr 21 10:16:41.010620 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:16:41.010627 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:16:41.010634 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:16:41.010641 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 21 10:16:41.010651 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 21 10:16:41.010818 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:16:41.010949 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:16:41.011072 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:16:41.011082 kernel: vgaarb: loaded
Apr 21 10:16:41.011090 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:16:41.011098 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:16:41.011105 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:16:41.011117 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:16:41.011124 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:16:41.011131 kernel: pnp: PnP ACPI init
Apr 21 10:16:41.011272 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 10:16:41.011282 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:16:41.011290 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:16:41.011297 kernel: NET: Registered PF_INET protocol family
Apr 21 10:16:41.011305 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:16:41.011312 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:16:41.011323 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:16:41.011330 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:16:41.011338 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:16:41.011345 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:16:41.011352 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.011359 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:16:41.011367 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:16:41.011374 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:16:41.011539 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:16:41.011661 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:16:41.011818 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:16:41.011937 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 21 10:16:41.012057 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 10:16:41.012171 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 21 10:16:41.012180 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:16:41.012188 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 21 10:16:41.012195 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 21 10:16:41.012208 kernel: Initialise system trusted keyrings
Apr 21 10:16:41.012215 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 10:16:41.012222 kernel: Key type asymmetric registered
Apr 21 10:16:41.012229 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:16:41.012237 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:16:41.012244 kernel: io scheduler mq-deadline registered
Apr 21 10:16:41.012251 kernel: io scheduler kyber registered
Apr 21 10:16:41.012258 kernel: io scheduler bfq registered
Apr 21 10:16:41.012265 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:16:41.012276 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 10:16:41.012283 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 10:16:41.012290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:16:41.012298 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:16:41.012305 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:16:41.012313 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:16:41.012320 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:16:41.012451 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 21 10:16:41.012465 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:16:41.014616 kernel: rtc_cmos 00:03: registered as rtc0
Apr 21 10:16:41.014778 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:16:40 UTC (1776766600)
Apr 21 10:16:41.014900 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 10:16:41.014910 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 21 10:16:41.014917 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:16:41.014924 kernel: Segment Routing with IPv6
Apr 21 10:16:41.014931 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:16:41.014938 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:16:41.014950 kernel: Key type dns_resolver registered
Apr 21 10:16:41.014957 kernel: IPI shorthand broadcast: enabled
Apr 21 10:16:41.014965 kernel: sched_clock: Marking stable (883004469, 328447229)->(1338569138, -127117440)
Apr 21 10:16:41.014972 kernel: registered taskstats version 1
Apr 21 10:16:41.014979 kernel: Loading compiled-in X.509 certificates
Apr 21 10:16:41.014986 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:16:41.014993 kernel: Key type .fscrypt registered
Apr 21 10:16:41.015000 kernel: Key type fscrypt-provisioning registered
Apr 21 10:16:41.015007 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:16:41.015017 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:16:41.015024 kernel: ima: No architecture policies found
Apr 21 10:16:41.015031 kernel: clk: Disabling unused clocks
Apr 21 10:16:41.015041 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:16:41.015053 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:16:41.015062 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:16:41.015069 kernel: Run /init as init process
Apr 21 10:16:41.015076 kernel: with arguments:
Apr 21 10:16:41.015086 kernel: /init
Apr 21 10:16:41.015093 kernel: with environment:
Apr 21 10:16:41.015100 kernel: HOME=/
Apr 21 10:16:41.015107 kernel: TERM=linux
Apr 21 10:16:41.015116 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:16:41.015125 systemd[1]: Detected virtualization kvm.
Apr 21 10:16:41.015132 systemd[1]: Detected architecture x86-64.
Apr 21 10:16:41.015140 systemd[1]: Running in initrd.
Apr 21 10:16:41.015149 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:16:41.015157 systemd[1]: Hostname set to .
Apr 21 10:16:41.015164 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:16:41.015172 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:16:41.015179 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:41.015201 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:41.015215 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:16:41.015222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:16:41.015230 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:16:41.015238 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:16:41.015247 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:16:41.015255 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:16:41.015263 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:41.015273 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:41.015280 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:16:41.015288 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:16:41.015296 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:16:41.015304 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:16:41.015311 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:16:41.015319 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:16:41.015327 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:16:41.015337 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:16:41.015345 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:16:41.015352 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:16:41.015360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:16:41.015367 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:16:41.015375 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:16:41.015383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:16:41.015390 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:16:41.015398 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:16:41.015408 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:16:41.015416 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:16:41.015423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:41.015451 systemd-journald[178]: Collecting audit messages is disabled.
Apr 21 10:16:41.015471 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:16:41.016169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:16:41.016179 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:16:41.016192 systemd-journald[178]: Journal started
Apr 21 10:16:41.016208 systemd-journald[178]: Runtime Journal (/run/log/journal/98bc23b5e7bd4259a5123da0b002a15c) is 8.0M, max 78.3M, 70.3M free.
Apr 21 10:16:41.003006 systemd-modules-load[179]: Inserted module 'overlay'
Apr 21 10:16:41.108576 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:16:41.108614 kernel: Bridge firewalling registered
Apr 21 10:16:41.034981 systemd-modules-load[179]: Inserted module 'br_netfilter'
Apr 21 10:16:41.112805 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:16:41.113910 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:16:41.114931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:41.122607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:16:41.125161 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:16:41.127614 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:16:41.138781 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:16:41.142416 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:16:41.157969 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:16:41.171163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:41.172390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:16:41.182728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:16:41.185974 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:16:41.188614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:16:41.202532 dracut-cmdline[208]: dracut-dracut-053
Apr 21 10:16:41.210215 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:16:41.213086 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:41.237629 systemd-resolved[210]: Positive Trust Anchors:
Apr 21 10:16:41.237643 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:16:41.237672 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:16:41.244880 systemd-resolved[210]: Defaulting to hostname 'linux'.
Apr 21 10:16:41.246092 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:16:41.247377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:16:41.292521 kernel: SCSI subsystem initialized
Apr 21 10:16:41.302502 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:16:41.313526 kernel: iscsi: registered transport (tcp)
Apr 21 10:16:41.334679 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:16:41.334775 kernel: QLogic iSCSI HBA Driver
Apr 21 10:16:41.378477 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:16:41.387663 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:16:41.412618 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:16:41.412666 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:16:41.414801 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:16:41.457526 kernel: raid6: avx2x4 gen() 34435 MB/s
Apr 21 10:16:41.475508 kernel: raid6: avx2x2 gen() 30654 MB/s
Apr 21 10:16:41.493804 kernel: raid6: avx2x1 gen() 24921 MB/s
Apr 21 10:16:41.493839 kernel: raid6: using algorithm avx2x4 gen() 34435 MB/s
Apr 21 10:16:41.516539 kernel: raid6: .... xor() 4316 MB/s, rmw enabled
Apr 21 10:16:41.516567 kernel: raid6: using avx2x2 recovery algorithm
Apr 21 10:16:41.537519 kernel: xor: automatically using best checksumming function avx
Apr 21 10:16:41.676527 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:16:41.689162 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:16:41.697646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:16:41.711700 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Apr 21 10:16:41.716419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:16:41.723671 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:16:41.737392 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Apr 21 10:16:41.768206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:16:41.775620 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:16:41.846945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:16:41.855642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:16:41.871718 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:16:41.875007 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:16:41.876685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:16:41.879066 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:16:41.885673 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:16:41.903169 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:16:41.927553 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:16:41.938287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:16:41.938365 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:42.124920 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:16:42.126181 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:16:42.126271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:42.127049 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:42.174445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:42.188936 kernel: scsi host0: Virtio SCSI HBA
Apr 21 10:16:42.191974 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 21 10:16:42.192016 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:16:42.192028 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:16:42.205452 kernel: libata version 3.00 loaded.
Apr 21 10:16:42.223594 kernel: ahci 0000:00:1f.2: version 3.0
Apr 21 10:16:42.223840 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 21 10:16:42.226382 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 21 10:16:42.226602 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 21 10:16:42.235514 kernel: scsi host1: ahci
Apr 21 10:16:42.236537 kernel: scsi host2: ahci
Apr 21 10:16:42.239575 kernel: scsi host3: ahci
Apr 21 10:16:42.239806 kernel: scsi host4: ahci
Apr 21 10:16:42.242584 kernel: scsi host5: ahci
Apr 21 10:16:42.244503 kernel: scsi host6: ahci
Apr 21 10:16:42.244729 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Apr 21 10:16:42.244744 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Apr 21 10:16:42.244754 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Apr 21 10:16:42.244764 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Apr 21 10:16:42.244783 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Apr 21 10:16:42.244793 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Apr 21 10:16:42.249652 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 21 10:16:42.250384 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 21 10:16:42.250777 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 21 10:16:42.251017 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 21 10:16:42.252606 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 21 10:16:42.252772 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:16:42.252793 kernel: GPT:9289727 != 167739391
Apr 21 10:16:42.252803 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:16:42.252813 kernel: GPT:9289727 != 167739391
Apr 21 10:16:42.252822 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:16:42.252832 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:16:42.252842 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 21 10:16:42.380447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:42.387761 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:16:42.424114 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:42.553534 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 21 10:16:42.553634 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 21 10:16:42.563361 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 21 10:16:42.563511 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 21 10:16:42.566518 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 21 10:16:42.566575 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 21 10:16:42.609510 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (452)
Apr 21 10:16:42.619502 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (441)
Apr 21 10:16:42.623135 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 21 10:16:42.629525 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 21 10:16:42.635417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:16:42.640775 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 21 10:16:42.641704 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 21 10:16:42.648938 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:16:42.658512 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:16:42.658693 disk-uuid[566]: Primary Header is updated.
Apr 21 10:16:42.658693 disk-uuid[566]: Secondary Entries is updated.
Apr 21 10:16:42.658693 disk-uuid[566]: Secondary Header is updated.
Apr 21 10:16:43.676533 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 21 10:16:43.678331 disk-uuid[567]: The operation has completed successfully.
Apr 21 10:16:43.740935 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:16:43.741101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:16:43.744782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:16:43.752663 sh[584]: Success
Apr 21 10:16:43.769533 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 21 10:16:43.825250 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:16:43.834629 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:16:43.836560 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:16:43.867033 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:16:43.867082 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:43.869528 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:16:43.873643 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:16:43.878360 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:16:43.887532 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:16:43.890379 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:16:43.892230 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:16:43.902771 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:16:43.905640 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:16:43.928631 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:43.928700 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:43.928721 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:16:43.936829 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:16:43.936865 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:16:43.953825 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:43.953538 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:16:43.961846 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:16:43.967674 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:16:44.052027 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:16:44.059648 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:16:44.059990 ignition[697]: Ignition 2.19.0
Apr 21 10:16:44.060002 ignition[697]: Stage: fetch-offline
Apr 21 10:16:44.060072 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:44.060101 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:44.066849 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:16:44.060248 ignition[697]: parsed url from cmdline: ""
Apr 21 10:16:44.060253 ignition[697]: no config URL provided
Apr 21 10:16:44.060259 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:16:44.060270 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:16:44.060278 ignition[697]: failed to fetch config: resource requires networking
Apr 21 10:16:44.061907 ignition[697]: Ignition finished successfully
Apr 21 10:16:44.096435 systemd-networkd[769]: lo: Link UP
Apr 21 10:16:44.096451 systemd-networkd[769]: lo: Gained carrier
Apr 21 10:16:44.098312 systemd-networkd[769]: Enumeration completed
Apr 21 10:16:44.098442 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:16:44.099275 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:44.099280 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:16:44.100069 systemd[1]: Reached target network.target - Network.
Apr 21 10:16:44.101454 systemd-networkd[769]: eth0: Link UP
Apr 21 10:16:44.101460 systemd-networkd[769]: eth0: Gained carrier
Apr 21 10:16:44.101468 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:44.111745 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:16:44.128627 ignition[773]: Ignition 2.19.0
Apr 21 10:16:44.128642 ignition[773]: Stage: fetch
Apr 21 10:16:44.128814 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:44.128827 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:44.128916 ignition[773]: parsed url from cmdline: ""
Apr 21 10:16:44.128921 ignition[773]: no config URL provided
Apr 21 10:16:44.128926 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:16:44.128936 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:16:44.128958 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 21 10:16:44.129150 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:16:44.329250 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 21 10:16:44.329459 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:16:44.730034 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 21 10:16:44.730192 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:16:44.884600 systemd-networkd[769]: eth0: DHCPv4 address 172.236.109.217/24, gateway 172.236.109.1 acquired from 23.205.167.221
Apr 21 10:16:45.530380 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 21 10:16:45.626504 ignition[773]: PUT result: OK
Apr 21 10:16:45.626587 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 21 10:16:45.737017 ignition[773]: GET result: OK
Apr 21 10:16:45.737187 ignition[773]: parsing config with SHA512: 412b4a153299512a51d2e7cf9b912a12482b5054bdc45155fc8d8749db028ffa74c205c1eaccf7a0c60fae82d020295e731c965a6a82b2136cdd523731644d8b
Apr 21 10:16:45.743103 unknown[773]: fetched base config from "system"
Apr 21 10:16:45.743123 unknown[773]: fetched base config from "system"
Apr 21 10:16:45.744903 ignition[773]: fetch: fetch complete
Apr 21 10:16:45.743139 unknown[773]: fetched user config from "akamai"
Apr 21 10:16:45.744928 ignition[773]: fetch: fetch passed
Apr 21 10:16:45.745006 ignition[773]: Ignition finished successfully
Apr 21 10:16:45.748529 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:16:45.760645 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:16:45.777551 ignition[781]: Ignition 2.19.0
Apr 21 10:16:45.777570 ignition[781]: Stage: kargs
Apr 21 10:16:45.777787 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:45.777804 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:45.779413 ignition[781]: kargs: kargs passed
Apr 21 10:16:45.781539 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:16:45.779511 ignition[781]: Ignition finished successfully
Apr 21 10:16:45.789636 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:16:45.804164 ignition[787]: Ignition 2.19.0
Apr 21 10:16:45.804181 ignition[787]: Stage: disks
Apr 21 10:16:45.804416 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:45.808283 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:16:45.804432 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:45.831097 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:16:45.806116 ignition[787]: disks: disks passed
Apr 21 10:16:45.832384 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:16:45.806196 ignition[787]: Ignition finished successfully
Apr 21 10:16:45.834179 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:16:45.835916 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:16:45.837409 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:16:45.845707 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:16:45.863365 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:16:45.866999 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:16:45.873602 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:16:45.976559 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:16:45.977664 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:16:45.979144 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:16:45.986612 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:16:45.990370 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:16:45.991967 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:16:45.992143 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:16:45.992324 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:16:46.004516 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (803)
Apr 21 10:16:46.008614 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:16:46.015414 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:46.015456 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:46.015475 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:16:46.023513 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:16:46.023558 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:16:46.026665 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:16:46.030265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:16:46.078868 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:16:46.083386 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:16:46.089469 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:16:46.093942 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:16:46.126984 systemd-networkd[769]: eth0: Gained IPv6LL
Apr 21 10:16:46.193340 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:16:46.199570 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:16:46.202301 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:16:46.209329 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:16:46.214186 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:46.239014 ignition[921]: INFO : Ignition 2.19.0
Apr 21 10:16:46.241576 ignition[921]: INFO : Stage: mount
Apr 21 10:16:46.241576 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:46.241576 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:46.241576 ignition[921]: INFO : mount: mount passed
Apr 21 10:16:46.241576 ignition[921]: INFO : Ignition finished successfully
Apr 21 10:16:46.242908 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:16:46.244192 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:16:46.251601 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:16:46.982626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:16:47.000516 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (934)
Apr 21 10:16:47.007842 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:47.007874 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:47.007892 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:16:47.017310 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:16:47.017344 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:16:47.020543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:16:47.055372 ignition[951]: INFO : Ignition 2.19.0
Apr 21 10:16:47.055372 ignition[951]: INFO : Stage: files
Apr 21 10:16:47.055372 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:47.058167 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:47.058167 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:16:47.062269 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:16:47.062269 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:16:47.065082 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:16:47.066346 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:16:47.066346 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:16:47.066214 unknown[951]: wrote ssh authorized keys file for user: core
Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:16:47.069784 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:16:47.356778 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:16:47.476926 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:16:47.478511 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:47.489716 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:16:47.785708 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:16:48.032655 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:48.032655 ignition[951]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:16:48.058006 ignition[951]: INFO : files: files passed
Apr 21 10:16:48.058006 ignition[951]: INFO : Ignition finished successfully
Apr 21 10:16:48.041997 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:16:48.066668 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:16:48.071655 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:16:48.073236 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:16:48.073358 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:16:48.092832 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:48.094573 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:48.094573 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:48.096206 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:16:48.097714 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:16:48.103730 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:16:48.129985 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:16:48.130119 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:16:48.132346 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:16:48.133537 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:16:48.135295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:16:48.140729 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:16:48.155433 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:16:48.163643 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:16:48.173869 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:16:48.174846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:16:48.176600 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:16:48.178143 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:16:48.178294 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:16:48.180308 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:16:48.181380 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:16:48.182972 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:16:48.184425 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:16:48.185877 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:16:48.187501 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:16:48.189108 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:16:48.190816 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:16:48.192347 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:16:48.194020 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:16:48.195543 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:16:48.195669 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:16:48.197401 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:48.198456 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:48.199920 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:16:48.200030 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:48.201566 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:16:48.201666 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:16:48.203813 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:16:48.203923 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:16:48.204930 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:16:48.205028 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:16:48.216643 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:16:48.217679 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:16:48.217795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:16:48.222700 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:16:48.223663 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:16:48.223780 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:16:48.227132 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:16:48.227232 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:16:48.235841 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:16:48.236530 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:16:48.240533 ignition[1004]: INFO : Ignition 2.19.0
Apr 21 10:16:48.240533 ignition[1004]: INFO : Stage: umount
Apr 21 10:16:48.240533 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:48.240533 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 21 10:16:48.246537 ignition[1004]: INFO : umount: umount passed
Apr 21 10:16:48.246537 ignition[1004]: INFO : Ignition finished successfully
Apr 21 10:16:48.247991 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:16:48.248118 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:16:48.275123 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:16:48.275556 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:16:48.275608 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:16:48.277928 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:16:48.277980 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:16:48.279380 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:16:48.279442 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:16:48.280858 systemd[1]: Stopped target network.target - Network.
Apr 21 10:16:48.282384 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:16:48.282441 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:16:48.287770 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:16:48.289155 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:16:48.290557 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:48.291466 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:16:48.293124 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:16:48.294839 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:16:48.294889 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:16:48.297009 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:16:48.297053 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:16:48.298626 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:16:48.298685 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:16:48.301241 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:16:48.301310 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:16:48.305436 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:16:48.307697 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:16:48.309437 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:16:48.309623 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:16:48.309662 systemd-networkd[769]: eth0: DHCPv6 lease lost
Apr 21 10:16:48.311840 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:16:48.311966 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:16:48.317209 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:16:48.317282 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:16:48.320754 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:16:48.320809 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:16:48.332668 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:16:48.333407 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:16:48.333504 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:16:48.335308 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:16:48.339958 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:16:48.340089 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:16:48.349820 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:16:48.350023 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:16:48.358792 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:16:48.358872 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:16:48.360575 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:16:48.360618 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:16:48.362332 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:16:48.362387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:16:48.365051 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:16:48.365120 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:16:48.366779 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:16:48.366851 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:48.375733 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:16:48.377521 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:16:48.378366 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:16:48.379189 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:16:48.379245 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:16:48.381719 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:16:48.381774 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:16:48.382742 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:16:48.382794 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:16:48.384534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:16:48.384592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:48.386727 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:16:48.386843 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:16:48.388326 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:16:48.388430 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:16:48.390373 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:16:48.398660 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:16:48.407644 systemd[1]: Switching root.
Apr 21 10:16:48.433958 systemd-journald[178]: Journal stopped
Apr 21 10:16:49.691605 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:16:49.691635 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:16:49.691648 kernel: SELinux: policy capability open_perms=1
Apr 21 10:16:49.691658 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:16:49.691671 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:16:49.691681 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:16:49.691691 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:16:49.691700 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:16:49.691709 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:16:49.691719 kernel: audit: type=1403 audit(1776766608.638:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:16:49.691729 systemd[1]: Successfully loaded SELinux policy in 52.948ms.
Apr 21 10:16:49.691743 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.652ms.
Apr 21 10:16:49.691754 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:16:49.691765 systemd[1]: Detected virtualization kvm.
Apr 21 10:16:49.691775 systemd[1]: Detected architecture x86-64.
Apr 21 10:16:49.691785 systemd[1]: Detected first boot.
Apr 21 10:16:49.691798 systemd[1]: Initializing machine ID from random generator.
Apr 21 10:16:49.691808 zram_generator::config[1068]: No configuration found.
Apr 21 10:16:49.691819 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:16:49.691829 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:16:49.691840 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 10:16:49.691851 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:16:49.691861 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:16:49.691877 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:16:49.691887 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:16:49.691898 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:16:49.691909 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:16:49.691919 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:16:49.691929 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:16:49.691939 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:49.691951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:49.691962 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:16:49.691972 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:16:49.691982 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:16:49.691992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:16:49.692002 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:16:49.692012 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:49.692022 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:16:49.692035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:16:49.692045 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:16:49.692059 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:16:49.692069 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:16:49.692079 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:16:49.692090 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:16:49.692101 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:16:49.692111 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:16:49.692124 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:16:49.692135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:16:49.692145 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:16:49.692155 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:16:49.692166 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:16:49.692179 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:16:49.692189 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:16:49.692200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:49.692210 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:16:49.692220 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:16:49.692230 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:16:49.692240 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:16:49.692251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:49.692264 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:16:49.692275 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:16:49.692286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:16:49.692296 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:16:49.692306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:16:49.692317 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:16:49.692328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:16:49.692339 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:16:49.692352 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 10:16:49.692363 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 10:16:49.692374 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:16:49.692384 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:16:49.692395 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:16:49.692426 systemd-journald[1160]: Collecting audit messages is disabled.
Apr 21 10:16:49.692453 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:16:49.692465 systemd-journald[1160]: Journal started
Apr 21 10:16:49.700557 systemd-journald[1160]: Runtime Journal (/run/log/journal/22b10385f840496cb6e466d9a0954d41) is 8.0M, max 78.3M, 70.3M free.
Apr 21 10:16:49.719206 kernel: ACPI: bus type drm_connector registered
Apr 21 10:16:49.719283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:16:49.725556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:49.737226 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:16:49.734774 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:16:49.735630 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:16:49.736466 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:16:49.737794 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:16:49.738660 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:16:49.739534 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:16:49.740633 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:16:49.742179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:16:49.744377 kernel: fuse: init (API version 7.39)
Apr 21 10:16:49.748509 kernel: loop: module loaded
Apr 21 10:16:49.746377 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:16:49.746987 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:16:49.752004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:16:49.752223 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:16:49.755089 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:16:49.755374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:16:49.756575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:16:49.756850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:16:49.758013 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:16:49.758285 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:16:49.759415 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:16:49.760107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:16:49.761350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:16:49.762578 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:16:49.764128 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:16:49.781309 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:16:49.787596 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:16:49.797190 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:16:49.798668 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:16:49.807696 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:16:49.821737 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:16:49.824655 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:16:49.830023 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:16:49.831000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:16:49.861602 systemd-journald[1160]: Time spent on flushing to /var/log/journal/22b10385f840496cb6e466d9a0954d41 is 50.868ms for 958 entries.
Apr 21 10:16:49.861602 systemd-journald[1160]: System Journal (/var/log/journal/22b10385f840496cb6e466d9a0954d41) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:16:49.929718 systemd-journald[1160]: Received client request to flush runtime journal.
Apr 21 10:16:49.838044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:16:49.880648 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:16:49.886568 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:16:49.888695 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:16:49.890997 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:16:49.899549 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:16:49.907244 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:16:49.919721 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:16:49.934759 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:16:49.948270 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Apr 21 10:16:49.948292 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Apr 21 10:16:49.955758 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:16:49.962189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:16:49.966528 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:16:49.975743 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:16:50.006424 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:16:50.017894 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:16:50.037567 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Apr 21 10:16:50.037939 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Apr 21 10:16:50.044184 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:16:50.379240 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:16:50.387700 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:16:50.417898 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Apr 21 10:16:50.438463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:16:50.448630 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:16:50.472704 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:16:50.528702 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:16:50.550293 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 21 10:16:50.606621 systemd-networkd[1241]: lo: Link UP
Apr 21 10:16:50.607049 systemd-networkd[1241]: lo: Gained carrier
Apr 21 10:16:50.609033 systemd-networkd[1241]: Enumeration completed
Apr 21 10:16:50.610164 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:50.610175 systemd-networkd[1241]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:16:50.610915 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:16:50.611597 systemd-networkd[1241]: eth0: Link UP
Apr 21 10:16:50.611678 systemd-networkd[1241]: eth0: Gained carrier
Apr 21 10:16:50.611775 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:50.620665 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:16:50.637510 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1240)
Apr 21 10:16:50.641504 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:16:50.667505 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:16:50.680760 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:50.724541 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 21 10:16:50.733564 kernel: EDAC MC: Ver: 3.0.0
Apr 21 10:16:50.746526 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:16:50.754579 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:16:50.754598 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:16:50.754786 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:16:50.754413 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:16:50.762724 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:50.772892 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:16:50.780724 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:16:50.790816 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:16:50.817584 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:16:50.819459 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:50.837692 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:16:50.914277 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:50.923967 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:16:50.959605 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:16:50.962468 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:16:50.963293 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:16:50.963324 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:16:50.964304 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:16:50.966134 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:16:50.977633 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:16:50.980635 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:16:50.982680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:50.988645 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:16:50.994438 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:16:50.998914 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:16:51.004629 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:16:51.008016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:16:51.021291 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:16:51.023026 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:16:51.033608 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:16:51.057518 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:16:51.082520 kernel: loop1: detected capacity change from 0 to 142488
Apr 21 10:16:51.131516 kernel: loop2: detected capacity change from 0 to 228704
Apr 21 10:16:51.178513 kernel: loop3: detected capacity change from 0 to 8
Apr 21 10:16:51.205523 kernel: loop4: detected capacity change from 0 to 140768
Apr 21 10:16:51.226514 kernel: loop5: detected capacity change from 0 to 142488
Apr 21 10:16:51.248582 kernel: loop6: detected capacity change from 0 to 228704
Apr 21 10:16:51.265516 kernel: loop7: detected capacity change from 0 to 8
Apr 21 10:16:51.269608 (sd-merge)[1308]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 21 10:16:51.271073 (sd-merge)[1308]: Merged extensions into '/usr'.
Apr 21 10:16:51.289132 systemd[1]: Reloading requested from client PID 1296 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:16:51.289343 systemd[1]: Reloading...
Apr 21 10:16:51.353586 systemd-networkd[1241]: eth0: DHCPv4 address 172.236.109.217/24, gateway 172.236.109.1 acquired from 23.205.167.221
Apr 21 10:16:51.391518 zram_generator::config[1336]: No configuration found.
Apr 21 10:16:51.428399 ldconfig[1291]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:16:51.532199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:16:51.606009 systemd[1]: Reloading finished in 316 ms.
Apr 21 10:16:51.625135 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:16:51.626710 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:16:51.641758 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:16:51.647646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:16:51.651694 systemd[1]: Reloading requested from client PID 1386 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:16:51.651791 systemd[1]: Reloading...
Apr 21 10:16:51.681729 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:16:51.682174 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:16:51.684872 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:16:51.685154 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Apr 21 10:16:51.685227 systemd-tmpfiles[1388]: ACLs are not supported, ignoring.
Apr 21 10:16:51.689835 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:16:51.689854 systemd-tmpfiles[1388]: Skipping /boot
Apr 21 10:16:51.706426 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:16:51.706524 systemd-tmpfiles[1388]: Skipping /boot
Apr 21 10:16:51.757569 zram_generator::config[1416]: No configuration found.
Apr 21 10:16:51.864562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:16:51.929222 systemd[1]: Reloading finished in 276 ms.
Apr 21 10:16:51.949283 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:16:51.965765 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:16:51.981753 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:16:51.986640 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:16:51.995073 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:16:52.005428 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:16:52.014372 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:52.014943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:52.019590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:16:52.028403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:16:52.043241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:16:52.045231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:52.045913 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:52.049473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:16:52.049749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:16:52.055684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:16:52.055969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:16:52.057942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:16:52.061935 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:52.062122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:52.070685 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:16:52.084837 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:16:52.087013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:52.087583 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:52.090764 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:16:52.092809 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:16:52.093018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:16:52.097415 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:16:52.101105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:16:52.101330 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:16:52.103097 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:16:52.108273 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:16:52.108997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:16:52.119657 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:52.119891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:52.128902 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:16:52.132691 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:16:52.146754 augenrules[1511]: No rules
Apr 21 10:16:52.148419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:16:52.163627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:16:52.164826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:52.172373 systemd-resolved[1472]: Positive Trust Anchors:
Apr 21 10:16:52.172387 systemd-resolved[1472]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:16:52.172414 systemd-resolved[1472]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:16:52.181149 systemd-resolved[1472]: Defaulting to hostname 'linux'.
Apr 21 10:16:52.184732 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:16:52.187540 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:52.188757 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:16:52.190108 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:16:52.194966 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:16:52.196780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:16:52.197052 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:16:52.198395 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:16:52.198792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:16:52.200271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:16:52.200623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:16:52.202093 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:16:52.202630 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:16:52.209551 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:16:52.212943 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:16:52.220625 systemd[1]: Reached target network.target - Network.
Apr 21 10:16:52.221397 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:16:52.222310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:16:52.222370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:16:52.227628 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:16:52.228403 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:16:52.295038 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:16:52.296098 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:16:52.297046 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:16:52.297984 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:16:52.298851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:16:52.299725 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:16:52.299760 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:16:52.300525 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:16:52.301593 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:16:52.302628 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:16:52.303404 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:16:52.305068 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:16:52.307943 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:16:52.310077 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:16:52.320775 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:16:52.321621 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:16:52.322335 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:16:52.323259 systemd[1]: System is tainted: cgroupsv1
Apr 21 10:16:52.323298 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:16:52.323343 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:16:52.334602 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:16:52.339617 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:16:52.344842 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:16:52.352634 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:16:52.367014 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:16:52.369129 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:16:52.373694 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:16:52.377788 jq[1543]: false
Apr 21 10:16:52.386086 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:16:52.401200 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:16:52.410665 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:16:52.426870 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:16:52.432063 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:16:52.993024 systemd-resolved[1472]: Clock change detected. Flushing caches.
Apr 21 10:16:52.993215 systemd-timesyncd[1535]: Contacted time server 158.51.99.19:123 (0.flatcar.pool.ntp.org).
Apr 21 10:16:52.993563 systemd-timesyncd[1535]: Initial clock synchronization to Tue 2026-04-21 10:16:52.992984 UTC.
Apr 21 10:16:52.994656 dbus-daemon[1542]: [system] SELinux support is enabled
Apr 21 10:16:52.996391 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found loop4
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found loop5
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found loop6
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found loop7
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda1
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda2
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda3
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found usr
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda4
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda6
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda7
Apr 21 10:16:53.018822 extend-filesystems[1545]: Found sda9
Apr 21 10:16:53.018822 extend-filesystems[1545]: Checking size of /dev/sda9
Apr 21 10:16:53.113715 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 21 10:16:53.007807 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:16:53.011778 dbus-daemon[1542]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1241 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:16:53.113899 extend-filesystems[1545]: Resized partition /dev/sda9
Apr 21 10:16:53.131917 coreos-metadata[1541]: Apr 21 10:16:53.050 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 21 10:16:53.019795 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:16:53.090345 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:16:53.139229 extend-filesystems[1581]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:16:53.153053 update_engine[1564]: I20260421 10:16:53.134943 1564 main.cc:92] Flatcar Update Engine starting
Apr 21 10:16:53.030025 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:16:53.153630 jq[1567]: true
Apr 21 10:16:53.030455 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:16:53.032724 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:16:53.157260 tar[1571]: linux-amd64/LICENSE
Apr 21 10:16:53.157260 tar[1571]: linux-amd64/helm
Apr 21 10:16:53.033132 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:16:53.157656 jq[1575]: true
Apr 21 10:16:53.044168 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:16:53.044470 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:16:53.083324 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:16:53.083374 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:16:53.085741 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:16:53.085764 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:16:53.100681 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 21 10:16:53.101813 (ntainerd)[1580]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:16:53.147660 systemd-networkd[1241]: eth0: Gained IPv6LL
Apr 21 10:16:53.222055 update_engine[1564]: I20260421 10:16:53.163720 1564 update_check_scheduler.cc:74] Next update check in 9m35s
Apr 21 10:16:53.222101 coreos-metadata[1541]: Apr 21 10:16:53.220 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Apr 21 10:16:53.161041 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:16:53.164829 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:16:53.173764 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:16:53.179157 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:16:53.183725 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:16:53.226179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:16:53.243699 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:16:53.270352 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1243)
Apr 21 10:16:53.364795 systemd-logind[1562]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:16:53.364837 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:16:53.369303 systemd-logind[1562]: New seat seat0.
Apr 21 10:16:53.374651 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:16:53.376953 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:16:53.396390 bash[1618]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:16:53.397059 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:16:53.410992 systemd[1]: Starting sshkeys.service...
Apr 21 10:16:53.467448 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:16:53.476188 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:16:53.549834 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:16:53.558006 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:16:53.582573 coreos-metadata[1541]: Apr 21 10:16:53.582 INFO Fetch successful
Apr 21 10:16:53.582573 coreos-metadata[1541]: Apr 21 10:16:53.582 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Apr 21 10:16:53.596070 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 21 10:16:53.598289 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 21 10:16:53.600764 dbus-daemon[1542]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1587 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 21 10:16:53.603692 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Apr 21 10:16:53.611525 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 21 10:16:53.620768 extend-filesystems[1581]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 21 10:16:53.620768 extend-filesystems[1581]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 21 10:16:53.620768 extend-filesystems[1581]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Apr 21 10:16:53.683740 extend-filesystems[1545]: Resized filesystem in /dev/sda9
Apr 21 10:16:53.685619 containerd[1580]: time="2026-04-21T10:16:53.679399736Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:16:53.685993 coreos-metadata[1634]: Apr 21 10:16:53.659 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 21 10:16:53.645656 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:16:53.645968 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:16:53.680952 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:16:53.698324 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:16:53.700743 polkitd[1646]: Started polkitd version 121
Apr 21 10:16:53.707175 polkitd[1646]: Loading rules from directory /etc/polkit-1/rules.d
Apr 21 10:16:53.707233 polkitd[1646]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 21 10:16:53.709838 polkitd[1646]: Finished loading, compiling and executing 2 rules
Apr 21 10:16:53.710286 dbus-daemon[1542]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 21 10:16:53.710634 polkitd[1646]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 21 10:16:53.710842 systemd[1]: Started polkit.service - Authorization Manager.
Apr 21 10:16:53.720992 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:16:53.721304 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:16:53.733060 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:16:53.746572 containerd[1580]: time="2026-04-21T10:16:53.743200650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:53.752655 systemd-hostnamed[1587]: Hostname set to <172-236-109-217> (transient)
Apr 21 10:16:53.755070 systemd-resolved[1472]: System hostname changed to '172-236-109-217'.
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756268953Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756308963Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756329593Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756506983Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756527493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756627893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756645073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756874144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756889694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756906704Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759608 containerd[1580]: time="2026-04-21T10:16:53.756919794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759821 containerd[1580]: time="2026-04-21T10:16:53.757012444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759821 containerd[1580]: time="2026-04-21T10:16:53.757260284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759821 containerd[1580]: time="2026-04-21T10:16:53.757481064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:53.759821 containerd[1580]: time="2026-04-21T10:16:53.757499744Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:16:53.764670 containerd[1580]: time="2026-04-21T10:16:53.764643401Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:16:53.767057 containerd[1580]: time="2026-04-21T10:16:53.767036124Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:16:53.772036 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:16:53.777609 coreos-metadata[1634]: Apr 21 10:16:53.773 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Apr 21 10:16:53.777795 containerd[1580]: time="2026-04-21T10:16:53.777760934Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:16:53.777830 containerd[1580]: time="2026-04-21T10:16:53.777819614Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:16:53.777850 containerd[1580]: time="2026-04-21T10:16:53.777837334Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:16:53.777868 containerd[1580]: time="2026-04-21T10:16:53.777852045Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:16:53.777885 containerd[1580]: time="2026-04-21T10:16:53.777865735Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:16:53.778028 containerd[1580]: time="2026-04-21T10:16:53.778004485Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778256015Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778366025Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778389605Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778410995Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778431285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778443055Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778454175Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778467215Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778481065Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778493005Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778503635Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778514665Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778532635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.779326 containerd[1580]: time="2026-04-21T10:16:53.778578645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778596945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778613325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778624405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778635995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778646275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778658125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778673675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778686085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778696785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778707715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778720525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778733595Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778753245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778767875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781637 containerd[1580]: time="2026-04-21T10:16:53.778777735Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778824335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778839685Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778849605Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778860266Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778869496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778881036Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778895306Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:16:53.781881 containerd[1580]: time="2026-04-21T10:16:53.778913236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:16:53.782013 containerd[1580]: time="2026-04-21T10:16:53.779141926Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:16:53.782013 containerd[1580]: time="2026-04-21T10:16:53.779196586Z" level=info msg="Connect containerd service"
Apr 21 10:16:53.782013 containerd[1580]: time="2026-04-21T10:16:53.779227816Z" level=info msg="using legacy CRI server"
Apr 21 10:16:53.782013 containerd[1580]: time="2026-04-21T10:16:53.779234176Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:16:53.782013 containerd[1580]: time="2026-04-21T10:16:53.779313676Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:16:53.785200 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:16:53.785696 containerd[1580]: time="2026-04-21T10:16:53.785509542Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:16:53.785834 containerd[1580]: time="2026-04-21T10:16:53.785811182Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:16:53.786034 containerd[1580]: time="2026-04-21T10:16:53.785870913Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:16:53.786034 containerd[1580]: time="2026-04-21T10:16:53.785958963Z" level=info msg="Start subscribing containerd event"
Apr 21 10:16:53.786034 containerd[1580]: time="2026-04-21T10:16:53.785990693Z" level=info msg="Start recovering state"
Apr 21 10:16:53.786104 containerd[1580]: time="2026-04-21T10:16:53.786049273Z" level=info msg="Start event monitor"
Apr 21 10:16:53.786104 containerd[1580]: time="2026-04-21T10:16:53.786074333Z" level=info msg="Start snapshots syncer"
Apr 21 10:16:53.786104 containerd[1580]: time="2026-04-21T10:16:53.786083293Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:16:53.786104 containerd[1580]: time="2026-04-21T10:16:53.786090483Z" level=info msg="Start streaming server"
Apr 21 10:16:53.786172 containerd[1580]: time="2026-04-21T10:16:53.786136313Z" level=info msg="containerd successfully booted in 0.109630s"
Apr 21 10:16:53.793037 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:16:53.794823 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:16:53.796752 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:16:53.840090 coreos-metadata[1541]: Apr 21 10:16:53.837 INFO Fetch successful
Apr 21 10:16:53.912276 coreos-metadata[1634]: Apr 21 10:16:53.911 INFO Fetch successful
Apr 21 10:16:53.962603 update-ssh-keys[1696]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:16:53.965774 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:16:53.973015 systemd[1]: Finished sshkeys.service.
Apr 21 10:16:53.996841 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:16:53.998603 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:16:54.118979 tar[1571]: linux-amd64/README.md
Apr 21 10:16:54.134069 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:16:54.585719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:16:54.587353 (kubelet)[1725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:16:54.588342 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:16:54.589770 systemd[1]: Startup finished in 8.936s (kernel) + 5.451s (userspace) = 14.387s.
Apr 21 10:16:55.132031 kubelet[1725]: E0421 10:16:55.131971 1725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:16:55.135534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:16:55.135851 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:16:56.581513 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:16:56.587898 systemd[1]: Started sshd@0-172.236.109.217:22-50.85.169.122:51660.service - OpenSSH per-connection server daemon (50.85.169.122:51660).
Apr 21 10:16:57.187335 sshd[1736]: Accepted publickey for core from 50.85.169.122 port 51660 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:16:57.189527 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:57.198389 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:16:57.203791 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:16:57.207677 systemd-logind[1562]: New session 1 of user core.
Apr 21 10:16:57.222301 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:16:57.233863 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:16:57.237283 (systemd)[1742]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:16:57.333790 systemd[1742]: Queued start job for default target default.target.
Apr 21 10:16:57.334177 systemd[1742]: Created slice app.slice - User Application Slice.
Apr 21 10:16:57.334208 systemd[1742]: Reached target paths.target - Paths.
Apr 21 10:16:57.334222 systemd[1742]: Reached target timers.target - Timers.
Apr 21 10:16:57.345639 systemd[1742]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:16:57.352739 systemd[1742]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:16:57.352798 systemd[1742]: Reached target sockets.target - Sockets.
Apr 21 10:16:57.352811 systemd[1742]: Reached target basic.target - Basic System.
Apr 21 10:16:57.352855 systemd[1742]: Reached target default.target - Main User Target.
Apr 21 10:16:57.352889 systemd[1742]: Startup finished in 109ms.
Apr 21 10:16:57.354705 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:16:57.357741 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:16:57.795908 systemd[1]: Started sshd@1-172.236.109.217:22-50.85.169.122:51672.service - OpenSSH per-connection server daemon (50.85.169.122:51672).
Apr 21 10:16:58.417180 sshd[1754]: Accepted publickey for core from 50.85.169.122 port 51672 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:16:58.419238 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:58.424798 systemd-logind[1562]: New session 2 of user core.
Apr 21 10:16:58.430936 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:16:58.864912 sshd[1754]: pam_unix(sshd:session): session closed for user core
Apr 21 10:16:58.870369 systemd[1]: sshd@1-172.236.109.217:22-50.85.169.122:51672.service: Deactivated successfully.
Apr 21 10:16:58.875075 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:16:58.875812 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:16:58.876876 systemd-logind[1562]: Removed session 2.
Apr 21 10:16:58.969935 systemd[1]: Started sshd@2-172.236.109.217:22-50.85.169.122:51674.service - OpenSSH per-connection server daemon (50.85.169.122:51674).
Apr 21 10:16:59.564931 sshd[1762]: Accepted publickey for core from 50.85.169.122 port 51674 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:16:59.565515 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:59.569819 systemd-logind[1562]: New session 3 of user core.
Apr 21 10:16:59.576833 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:16:59.987162 sshd[1762]: pam_unix(sshd:session): session closed for user core
Apr 21 10:16:59.991741 systemd[1]: sshd@2-172.236.109.217:22-50.85.169.122:51674.service: Deactivated successfully.
Apr 21 10:16:59.995323 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:16:59.996151 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:16:59.997477 systemd-logind[1562]: Removed session 3.
Apr 21 10:17:00.093751 systemd[1]: Started sshd@3-172.236.109.217:22-50.85.169.122:36870.service - OpenSSH per-connection server daemon (50.85.169.122:36870).
Apr 21 10:17:00.720939 sshd[1770]: Accepted publickey for core from 50.85.169.122 port 36870 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:17:00.721535 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:00.726301 systemd-logind[1562]: New session 4 of user core.
Apr 21 10:17:00.734084 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:17:01.169732 sshd[1770]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:01.174247 systemd[1]: sshd@3-172.236.109.217:22-50.85.169.122:36870.service: Deactivated successfully.
Apr 21 10:17:01.178150 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:17:01.178212 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:17:01.180291 systemd-logind[1562]: Removed session 4.
Apr 21 10:17:01.280752 systemd[1]: Started sshd@4-172.236.109.217:22-50.85.169.122:36884.service - OpenSSH per-connection server daemon (50.85.169.122:36884).
Apr 21 10:17:01.871876 sshd[1778]: Accepted publickey for core from 50.85.169.122 port 36884 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:17:01.874142 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:01.881431 systemd-logind[1562]: New session 5 of user core.
Apr 21 10:17:01.887875 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:17:02.222051 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:17:02.222477 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:02.247496 sudo[1782]: pam_unix(sudo:session): session closed for user root
Apr 21 10:17:02.344486 sshd[1778]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:02.350116 systemd[1]: sshd@4-172.236.109.217:22-50.85.169.122:36884.service: Deactivated successfully.
Apr 21 10:17:02.356180 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:17:02.357368 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:17:02.358476 systemd-logind[1562]: Removed session 5.
Apr 21 10:17:02.452766 systemd[1]: Started sshd@5-172.236.109.217:22-50.85.169.122:36886.service - OpenSSH per-connection server daemon (50.85.169.122:36886).
Apr 21 10:17:03.077190 sshd[1787]: Accepted publickey for core from 50.85.169.122 port 36886 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:17:03.079048 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:03.085137 systemd-logind[1562]: New session 6 of user core.
Apr 21 10:17:03.091834 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:17:03.426325 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:17:03.426822 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:03.430711 sudo[1792]: pam_unix(sudo:session): session closed for user root
Apr 21 10:17:03.436495 sudo[1791]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:17:03.436925 sudo[1791]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:03.455736 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:17:03.457460 auditctl[1795]: No rules
Apr 21 10:17:03.457986 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:17:03.458284 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:17:03.462802 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:17:03.490991 augenrules[1814]: No rules
Apr 21 10:17:03.492975 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:17:03.496273 sudo[1791]: pam_unix(sudo:session): session closed for user root
Apr 21 10:17:03.598484 sshd[1787]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:03.601622 systemd[1]: sshd@5-172.236.109.217:22-50.85.169.122:36886.service: Deactivated successfully.
Apr 21 10:17:03.605892 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:17:03.606902 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:17:03.607913 systemd-logind[1562]: Removed session 6.
Apr 21 10:17:03.705764 systemd[1]: Started sshd@6-172.236.109.217:22-50.85.169.122:36900.service - OpenSSH per-connection server daemon (50.85.169.122:36900).
Apr 21 10:17:04.329472 sshd[1823]: Accepted publickey for core from 50.85.169.122 port 36900 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw
Apr 21 10:17:04.330205 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:17:04.336167 systemd-logind[1562]: New session 7 of user core.
Apr 21 10:17:04.343931 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:17:04.676954 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:17:04.677423 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:17:04.938764 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:17:04.940450 (dockerd)[1842]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:17:05.218585 dockerd[1842]: time="2026-04-21T10:17:05.215828520Z" level=info msg="Starting up"
Apr 21 10:17:05.217316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:17:05.223772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:05.336798 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1217885162-merged.mount: Deactivated successfully.
Apr 21 10:17:05.409932 systemd[1]: var-lib-docker-metacopy\x2dcheck2546126266-merged.mount: Deactivated successfully.
Apr 21 10:17:05.451762 dockerd[1842]: time="2026-04-21T10:17:05.451433925Z" level=info msg="Loading containers: start."
Apr 21 10:17:05.456064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:05.467122 (kubelet)[1873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:17:05.540128 kubelet[1873]: E0421 10:17:05.540084 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:17:05.547683 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:17:05.548530 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:17:05.586576 kernel: Initializing XFRM netlink socket
Apr 21 10:17:05.684742 systemd-networkd[1241]: docker0: Link UP
Apr 21 10:17:05.702079 dockerd[1842]: time="2026-04-21T10:17:05.702023986Z" level=info msg="Loading containers: done."
Apr 21 10:17:05.719610 dockerd[1842]: time="2026-04-21T10:17:05.719420493Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:17:05.720053 dockerd[1842]: time="2026-04-21T10:17:05.719535523Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:17:05.720053 dockerd[1842]: time="2026-04-21T10:17:05.719762984Z" level=info msg="Daemon has completed initialization"
Apr 21 10:17:05.747808 dockerd[1842]: time="2026-04-21T10:17:05.747721912Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:17:05.747937 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:17:06.230781 containerd[1580]: time="2026-04-21T10:17:06.230715065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 10:17:06.326038 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck593689530-merged.mount: Deactivated successfully.
Apr 21 10:17:06.810188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438438210.mount: Deactivated successfully.
Apr 21 10:17:07.996300 containerd[1580]: time="2026-04-21T10:17:07.996243370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:07.997451 containerd[1580]: time="2026-04-21T10:17:07.997417541Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193995"
Apr 21 10:17:07.998077 containerd[1580]: time="2026-04-21T10:17:07.998036091Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:08.000507 containerd[1580]: time="2026-04-21T10:17:08.000485514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:08.001921 containerd[1580]: time="2026-04-21T10:17:08.001648895Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.7708639s"
Apr 21 10:17:08.001921 containerd[1580]: time="2026-04-21T10:17:08.001693715Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 21 10:17:08.003929 containerd[1580]: time="2026-04-21T10:17:08.003902177Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 10:17:09.388037 containerd[1580]: time="2026-04-21T10:17:09.387903041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:09.388037 containerd[1580]: time="2026-04-21T10:17:09.387944231Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171453"
Apr 21 10:17:09.389303 containerd[1580]: time="2026-04-21T10:17:09.389268692Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:09.391767 containerd[1580]: time="2026-04-21T10:17:09.391733175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:09.392861 containerd[1580]: time="2026-04-21T10:17:09.392747746Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.388811719s"
Apr 21 10:17:09.392861 containerd[1580]: time="2026-04-21T10:17:09.392774466Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 21 10:17:09.394041 containerd[1580]: time="2026-04-21T10:17:09.393816257Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 10:17:10.486686 containerd[1580]: time="2026-04-21T10:17:10.486637510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:10.487778 containerd[1580]: time="2026-04-21T10:17:10.487637841Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289762"
Apr 21 10:17:10.488765 containerd[1580]: time="2026-04-21T10:17:10.488255321Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:10.490884 containerd[1580]: time="2026-04-21T10:17:10.490861684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:10.491848 containerd[1580]: time="2026-04-21T10:17:10.491820415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.097660388s"
Apr 21 10:17:10.491896 containerd[1580]: time="2026-04-21T10:17:10.491854995Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 21 10:17:10.492314 containerd[1580]: time="2026-04-21T10:17:10.492288885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 10:17:11.741690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414275856.mount: Deactivated successfully.
Apr 21 10:17:12.451689 containerd[1580]: time="2026-04-21T10:17:12.451646014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:12.453184 containerd[1580]: time="2026-04-21T10:17:12.452592985Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010717"
Apr 21 10:17:12.454743 containerd[1580]: time="2026-04-21T10:17:12.453901346Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:12.455842 containerd[1580]: time="2026-04-21T10:17:12.455822958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:12.456740 containerd[1580]: time="2026-04-21T10:17:12.456712129Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.964396044s"
Apr 21 10:17:12.456814 containerd[1580]: time="2026-04-21T10:17:12.456799719Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 21 10:17:12.458242 containerd[1580]: time="2026-04-21T10:17:12.458218611Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 10:17:13.035610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551508951.mount: Deactivated successfully.
Apr 21 10:17:13.776578 containerd[1580]: time="2026-04-21T10:17:13.776527749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:13.777657 containerd[1580]: time="2026-04-21T10:17:13.777613450Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244"
Apr 21 10:17:13.778423 containerd[1580]: time="2026-04-21T10:17:13.777953430Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:13.781187 containerd[1580]: time="2026-04-21T10:17:13.780726683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:13.781835 containerd[1580]: time="2026-04-21T10:17:13.781786264Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.323407103s"
Apr 21 10:17:13.781835 containerd[1580]: time="2026-04-21T10:17:13.781819024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 21 10:17:13.782835 containerd[1580]: time="2026-04-21T10:17:13.782790825Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 10:17:14.276181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755132462.mount: Deactivated successfully.
Apr 21 10:17:14.279756 containerd[1580]: time="2026-04-21T10:17:14.279722162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:14.280615 containerd[1580]: time="2026-04-21T10:17:14.280579343Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144"
Apr 21 10:17:14.281218 containerd[1580]: time="2026-04-21T10:17:14.281054333Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:14.283218 containerd[1580]: time="2026-04-21T10:17:14.283172945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:14.284451 containerd[1580]: time="2026-04-21T10:17:14.283907116Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 501.087631ms"
Apr 21 10:17:14.284451 containerd[1580]: time="2026-04-21T10:17:14.283949406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 21 10:17:14.284670 containerd[1580]: time="2026-04-21T10:17:14.284648307Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 10:17:14.818609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003549123.mount: Deactivated successfully.
Apr 21 10:17:15.798078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:17:15.803986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:15.971742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:15.973897 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:17:16.014891 kubelet[2184]: E0421 10:17:16.014826 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:17:16.018786 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:17:16.019046 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:17:16.609202 containerd[1580]: time="2026-04-21T10:17:16.609154271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:16.614571 containerd[1580]: time="2026-04-21T10:17:16.614455696Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719432"
Apr 21 10:17:16.621388 containerd[1580]: time="2026-04-21T10:17:16.621333033Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:16.630392 containerd[1580]: time="2026-04-21T10:17:16.629870161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:16.631226 containerd[1580]: time="2026-04-21T10:17:16.631190713Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.346513046s"
Apr 21 10:17:16.631285 containerd[1580]: time="2026-04-21T10:17:16.631230353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 21 10:17:19.282335 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:19.294970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:19.332713 systemd[1]: Reloading requested from client PID 2244 ('systemctl') (unit session-7.scope)...
Apr 21 10:17:19.332735 systemd[1]: Reloading...
Apr 21 10:17:19.462477 zram_generator::config[2288]: No configuration found.
Apr 21 10:17:19.575043 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:17:19.650973 systemd[1]: Reloading finished in 317 ms.
Apr 21 10:17:19.705753 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:17:19.705890 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:17:19.706360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:19.708905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:19.863711 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:19.873936 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:17:19.908568 kubelet[2351]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:17:19.908568 kubelet[2351]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:17:19.908568 kubelet[2351]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:17:19.908568 kubelet[2351]: I0421 10:17:19.906978 2351 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:17:20.141999 kubelet[2351]: I0421 10:17:20.141874 2351 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 10:17:20.141999 kubelet[2351]: I0421 10:17:20.141903 2351 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:17:20.142116 kubelet[2351]: I0421 10:17:20.142096 2351 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:17:20.174671 kubelet[2351]: E0421 10:17:20.174619 2351 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.109.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:17:20.178837 kubelet[2351]: I0421 10:17:20.178632 2351 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:17:20.187983 kubelet[2351]: E0421 10:17:20.187944 2351 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:17:20.187983 kubelet[2351]: I0421 10:17:20.187984 2351 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:17:20.193767 kubelet[2351]: I0421 10:17:20.193729 2351 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 10:17:20.194923 kubelet[2351]: I0421 10:17:20.194884 2351 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:17:20.195114 kubelet[2351]: I0421 10:17:20.194919 2351 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-109-217","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 21 10:17:20.195228 kubelet[2351]: I0421 10:17:20.195117 2351 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:17:20.195228 kubelet[2351]: I0421 10:17:20.195132 2351 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 10:17:20.195333 kubelet[2351]: I0421 10:17:20.195303 2351 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:20.200905 kubelet[2351]: I0421 10:17:20.200679 2351 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 10:17:20.200905 kubelet[2351]: I0421 10:17:20.200708 2351 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:17:20.200905 kubelet[2351]: I0421 10:17:20.200757 2351 kubelet.go:386] "Adding apiserver pod source"
Apr 21 10:17:20.200905 kubelet[2351]: I0421 10:17:20.200802 2351 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:17:20.206121 kubelet[2351]: E0421 10:17:20.205974 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.236.109.217:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-109-217&limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:17:20.206893 kubelet[2351]: E0421 10:17:20.206612 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.236.109.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:17:20.206893 kubelet[2351]: I0421 10:17:20.206697 2351 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:17:20.207217 kubelet[2351]: I0421 10:17:20.207192 2351 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:17:20.208087 kubelet[2351]: W0421 10:17:20.208064 2351 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:17:20.212247 kubelet[2351]: I0421 10:17:20.212219 2351 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:17:20.212306 kubelet[2351]: I0421 10:17:20.212275 2351 server.go:1289] "Started kubelet"
Apr 21 10:17:20.213905 kubelet[2351]: I0421 10:17:20.212365 2351 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:17:20.214152 kubelet[2351]: I0421 10:17:20.214139 2351 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:17:20.214671 kubelet[2351]: I0421 10:17:20.214626 2351 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:17:20.214982 kubelet[2351]: I0421 10:17:20.214957 2351 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:17:20.216484 kubelet[2351]: E0421 10:17:20.215057 2351 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.109.217:6443/api/v1/namespaces/default/events\": dial tcp 172.236.109.217:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-109-217.18a857d8769d656b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-109-217,UID:172-236-109-217,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-109-217,},FirstTimestamp:2026-04-21 10:17:20.212239723 +0000 UTC m=+0.334562316,LastTimestamp:2026-04-21 10:17:20.212239723 +0000 UTC m=+0.334562316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-109-217,}"
Apr 21 10:17:20.219597 kubelet[2351]: I0421 10:17:20.217649 2351 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:17:20.219597 kubelet[2351]: I0421 10:17:20.219127 2351 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:17:20.225010 kubelet[2351]: E0421 10:17:20.224979 2351 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:17:20.225265 kubelet[2351]: E0421 10:17:20.225239 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found"
Apr 21 10:17:20.225305 kubelet[2351]: I0421 10:17:20.225289 2351 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:17:20.225526 kubelet[2351]: I0421 10:17:20.225502 2351 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:17:20.225607 kubelet[2351]: I0421 10:17:20.225587 2351 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:17:20.227642 kubelet[2351]: I0421 10:17:20.226856 2351 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:17:20.227642 kubelet[2351]: I0421 10:17:20.226931 2351 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:17:20.227642 kubelet[2351]: E0421 10:17:20.227134 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.109.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:17:20.227642 kubelet[2351]: E0421 10:17:20.227525 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.109.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-109-217?timeout=10s\": dial tcp 172.236.109.217:6443: connect: connection refused" interval="200ms"
Apr 21 10:17:20.228330 kubelet[2351]: I0421 10:17:20.228311 2351 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:17:20.248630 kubelet[2351]: I0421 10:17:20.248590 2351 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:17:20.255030 kubelet[2351]: I0421 10:17:20.255003 2351 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:17:20.255189 kubelet[2351]: I0421 10:17:20.255174 2351 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:17:20.255339 kubelet[2351]: I0421 10:17:20.255307 2351 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:17:20.255415 kubelet[2351]: I0421 10:17:20.255403 2351 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:17:20.255538 kubelet[2351]: E0421 10:17:20.255514 2351 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:17:20.257362 kubelet[2351]: E0421 10:17:20.257342 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.109.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:17:20.263477 kubelet[2351]: I0421 10:17:20.263456 2351 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:17:20.263563 kubelet[2351]: I0421 10:17:20.263473 2351 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:17:20.263563 kubelet[2351]: I0421 10:17:20.263507 2351 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:20.268614 kubelet[2351]: I0421 10:17:20.268584 2351 policy_none.go:49] "None policy: Start"
Apr 21 10:17:20.268614 kubelet[2351]: I0421 10:17:20.268604 2351 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:17:20.268760 kubelet[2351]: I0421 10:17:20.268624 2351 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:17:20.273830 kubelet[2351]: E0421 10:17:20.273802 2351 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:17:20.273993 kubelet[2351]: I0421 10:17:20.273967 2351 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:17:20.274044 kubelet[2351]: I0421 10:17:20.273984 2351 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:17:20.275798 kubelet[2351]: I0421 10:17:20.275766 2351 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:17:20.279755 kubelet[2351]: E0421 10:17:20.279727 2351 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:17:20.279881 kubelet[2351]: E0421 10:17:20.279760 2351 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-109-217\" not found"
Apr 21 10:17:20.368627 kubelet[2351]: E0421 10:17:20.368256 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217"
Apr 21 10:17:20.372141 kubelet[2351]: E0421 10:17:20.371920 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217"
Apr 21 10:17:20.375190 kubelet[2351]: I0421 10:17:20.375175 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-236-109-217"
Apr 21 10:17:20.375804 kubelet[2351]: E0421 10:17:20.375765 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.109.217:6443/api/v1/nodes\": dial tcp 172.236.109.217:6443: connect: connection refused" node="172-236-109-217"
Apr 21 10:17:20.377134 kubelet[2351]: E0421 10:17:20.376572 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217"
Apr 21 10:17:20.428997 kubelet[2351]: E0421 10:17:20.428893 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.109.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-109-217?timeout=10s\": dial tcp 172.236.109.217:6443: connect: connection refused" interval="400ms"
Apr 21 10:17:20.527301 kubelet[2351]: I0421 10:17:20.527253 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-k8s-certs\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217"
Apr 21 10:17:20.527301 kubelet[2351]: I0421 10:17:20.527297 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-kubeconfig\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217"
Apr 21 10:17:20.527301 kubelet[2351]: I0421 10:17:20.527312 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217"
Apr 21 10:17:20.527301 kubelet[2351]: I0421 10:17:20.527328 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a7c02612c3b72d8e1adf41743540166-kubeconfig\") pod \"kube-scheduler-172-236-109-217\" (UID: \"2a7c02612c3b72d8e1adf41743540166\") " pod="kube-system/kube-scheduler-172-236-109-217"
Apr 21 10:17:20.527586 kubelet[2351]: I0421 10:17:20.527342 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c98a34731afc6d7f9522d53329ced902-k8s-certs\") pod \"kube-apiserver-172-236-109-217\" (UID: \"c98a34731afc6d7f9522d53329ced902\") " pod="kube-system/kube-apiserver-172-236-109-217"
Apr 21 10:17:20.527586 kubelet[2351]: I0421 10:17:20.527356 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName:
\"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-ca-certs\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:20.527586 kubelet[2351]: I0421 10:17:20.527370 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-flexvolume-dir\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:20.527586 kubelet[2351]: I0421 10:17:20.527385 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c98a34731afc6d7f9522d53329ced902-ca-certs\") pod \"kube-apiserver-172-236-109-217\" (UID: \"c98a34731afc6d7f9522d53329ced902\") " pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:20.527586 kubelet[2351]: I0421 10:17:20.527400 2351 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c98a34731afc6d7f9522d53329ced902-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-109-217\" (UID: \"c98a34731afc6d7f9522d53329ced902\") " pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:20.577616 kubelet[2351]: I0421 10:17:20.577587 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-236-109-217" Apr 21 10:17:20.577883 kubelet[2351]: E0421 10:17:20.577860 2351 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.109.217:6443/api/v1/nodes\": dial tcp 172.236.109.217:6443: connect: connection refused" node="172-236-109-217" Apr 21 10:17:20.669640 kubelet[2351]: E0421 10:17:20.669611 2351 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:20.670254 containerd[1580]: time="2026-04-21T10:17:20.670215551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-109-217,Uid:c43bbf208cda8b6e96c667271084db95,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:20.673107 kubelet[2351]: E0421 10:17:20.673072 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:20.673466 containerd[1580]: time="2026-04-21T10:17:20.673440344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-109-217,Uid:2a7c02612c3b72d8e1adf41743540166,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:20.678880 kubelet[2351]: E0421 10:17:20.678851 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:20.679410 containerd[1580]: time="2026-04-21T10:17:20.679334710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-109-217,Uid:c98a34731afc6d7f9522d53329ced902,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:20.830674 kubelet[2351]: E0421 10:17:20.830615 2351 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.109.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-109-217?timeout=10s\": dial tcp 172.236.109.217:6443: connect: connection refused" interval="800ms" Apr 21 10:17:20.979630 kubelet[2351]: I0421 10:17:20.979507 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-236-109-217" Apr 21 10:17:20.980290 kubelet[2351]: E0421 10:17:20.980260 2351 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://172.236.109.217:6443/api/v1/nodes\": dial tcp 172.236.109.217:6443: connect: connection refused" node="172-236-109-217" Apr 21 10:17:21.124657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394388645.mount: Deactivated successfully. Apr 21 10:17:21.153633 containerd[1580]: time="2026-04-21T10:17:21.153572604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:21.157579 containerd[1580]: time="2026-04-21T10:17:21.157530968Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:21.161600 containerd[1580]: time="2026-04-21T10:17:21.161530042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 21 10:17:21.163272 containerd[1580]: time="2026-04-21T10:17:21.163226534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:17:21.163634 containerd[1580]: time="2026-04-21T10:17:21.163604834Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:21.164311 containerd[1580]: time="2026-04-21T10:17:21.164277565Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:21.168007 containerd[1580]: time="2026-04-21T10:17:21.167777738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:17:21.169049 containerd[1580]: time="2026-04-21T10:17:21.169016999Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:21.171448 containerd[1580]: time="2026-04-21T10:17:21.171418822Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 497.925288ms" Apr 21 10:17:21.173512 containerd[1580]: time="2026-04-21T10:17:21.173486604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 494.094924ms" Apr 21 10:17:21.174188 containerd[1580]: time="2026-04-21T10:17:21.174133545Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 503.837564ms" Apr 21 10:17:21.227264 kubelet[2351]: E0421 10:17:21.227042 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.236.109.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:17:21.285649 kubelet[2351]: E0421 10:17:21.284742 2351 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Node: Get \"https://172.236.109.217:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-109-217&limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:17:21.301219 containerd[1580]: time="2026-04-21T10:17:21.300974041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:21.301219 containerd[1580]: time="2026-04-21T10:17:21.301038471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:21.301219 containerd[1580]: time="2026-04-21T10:17:21.301049611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:21.301219 containerd[1580]: time="2026-04-21T10:17:21.301136422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:21.303735 containerd[1580]: time="2026-04-21T10:17:21.303498934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:21.303735 containerd[1580]: time="2026-04-21T10:17:21.303582114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:21.303735 containerd[1580]: time="2026-04-21T10:17:21.303601014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:21.304007 containerd[1580]: time="2026-04-21T10:17:21.303696454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:21.304408 containerd[1580]: time="2026-04-21T10:17:21.304149735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:21.304408 containerd[1580]: time="2026-04-21T10:17:21.304187855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:21.304408 containerd[1580]: time="2026-04-21T10:17:21.304214415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:21.304408 containerd[1580]: time="2026-04-21T10:17:21.304285995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:21.322418 kubelet[2351]: E0421 10:17:21.322373 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.236.109.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:17:21.392083 containerd[1580]: time="2026-04-21T10:17:21.392037702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-109-217,Uid:c98a34731afc6d7f9522d53329ced902,Namespace:kube-system,Attempt:0,} returns sandbox id \"cad86ba748e79471dea54c25ae21db6f533ab195ad7448d9a7b3671ec70dfbf9\"" Apr 21 10:17:21.393497 kubelet[2351]: E0421 10:17:21.393321 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:21.397038 containerd[1580]: time="2026-04-21T10:17:21.396821697Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-236-109-217,Uid:c43bbf208cda8b6e96c667271084db95,Namespace:kube-system,Attempt:0,} returns sandbox id \"160090c21e7824623f0e1c6f7dd034f1b5a255b9822d9fd49e96a1dd4fae70fb\"" Apr 21 10:17:21.399775 kubelet[2351]: E0421 10:17:21.399538 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:21.407050 containerd[1580]: time="2026-04-21T10:17:21.406674457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-109-217,Uid:2a7c02612c3b72d8e1adf41743540166,Namespace:kube-system,Attempt:0,} returns sandbox id \"777857b06a1e074b7dd7eac1bb02b938db1d1b9bc461da84f2ff72706780f3bf\"" Apr 21 10:17:21.407989 kubelet[2351]: E0421 10:17:21.407971 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:21.409128 containerd[1580]: time="2026-04-21T10:17:21.409083559Z" level=info msg="CreateContainer within sandbox \"160090c21e7824623f0e1c6f7dd034f1b5a255b9822d9fd49e96a1dd4fae70fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:17:21.409537 containerd[1580]: time="2026-04-21T10:17:21.409493640Z" level=info msg="CreateContainer within sandbox \"cad86ba748e79471dea54c25ae21db6f533ab195ad7448d9a7b3671ec70dfbf9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:17:21.412238 containerd[1580]: time="2026-04-21T10:17:21.412213743Z" level=info msg="CreateContainer within sandbox \"777857b06a1e074b7dd7eac1bb02b938db1d1b9bc461da84f2ff72706780f3bf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:17:21.419173 kubelet[2351]: E0421 10:17:21.419133 2351 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://172.236.109.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.109.217:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:17:21.424349 containerd[1580]: time="2026-04-21T10:17:21.424243685Z" level=info msg="CreateContainer within sandbox \"cad86ba748e79471dea54c25ae21db6f533ab195ad7448d9a7b3671ec70dfbf9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bba4a1dd15fb60e22b7fe7d5b2e83d85790d2332eb19c57570ad3115df872591\"" Apr 21 10:17:21.425000 containerd[1580]: time="2026-04-21T10:17:21.424947205Z" level=info msg="StartContainer for \"bba4a1dd15fb60e22b7fe7d5b2e83d85790d2332eb19c57570ad3115df872591\"" Apr 21 10:17:21.427541 containerd[1580]: time="2026-04-21T10:17:21.427502398Z" level=info msg="CreateContainer within sandbox \"160090c21e7824623f0e1c6f7dd034f1b5a255b9822d9fd49e96a1dd4fae70fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"29c4ae99107b4b61b9438a93b11e20addb1d5c7701ae1c515c665ede23d6acd9\"" Apr 21 10:17:21.428289 containerd[1580]: time="2026-04-21T10:17:21.428271199Z" level=info msg="StartContainer for \"29c4ae99107b4b61b9438a93b11e20addb1d5c7701ae1c515c665ede23d6acd9\"" Apr 21 10:17:21.429432 containerd[1580]: time="2026-04-21T10:17:21.429357050Z" level=info msg="CreateContainer within sandbox \"777857b06a1e074b7dd7eac1bb02b938db1d1b9bc461da84f2ff72706780f3bf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11478eef17e4acbefdf0360ca3ec991441483c3f7aa91d1d3b3522bec19ddd0d\"" Apr 21 10:17:21.430571 containerd[1580]: time="2026-04-21T10:17:21.429796380Z" level=info msg="StartContainer for \"11478eef17e4acbefdf0360ca3ec991441483c3f7aa91d1d3b3522bec19ddd0d\"" Apr 21 10:17:21.539126 containerd[1580]: time="2026-04-21T10:17:21.538795619Z" level=info msg="StartContainer for 
\"bba4a1dd15fb60e22b7fe7d5b2e83d85790d2332eb19c57570ad3115df872591\" returns successfully" Apr 21 10:17:21.551805 containerd[1580]: time="2026-04-21T10:17:21.551776822Z" level=info msg="StartContainer for \"29c4ae99107b4b61b9438a93b11e20addb1d5c7701ae1c515c665ede23d6acd9\" returns successfully" Apr 21 10:17:21.577622 containerd[1580]: time="2026-04-21T10:17:21.576941667Z" level=info msg="StartContainer for \"11478eef17e4acbefdf0360ca3ec991441483c3f7aa91d1d3b3522bec19ddd0d\" returns successfully" Apr 21 10:17:21.783005 kubelet[2351]: I0421 10:17:21.782972 2351 kubelet_node_status.go:75] "Attempting to register node" node="172-236-109-217" Apr 21 10:17:22.267927 kubelet[2351]: E0421 10:17:22.267736 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217" Apr 21 10:17:22.267927 kubelet[2351]: E0421 10:17:22.267850 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:22.272274 kubelet[2351]: E0421 10:17:22.272006 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217" Apr 21 10:17:22.272274 kubelet[2351]: E0421 10:17:22.272099 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:22.275582 kubelet[2351]: E0421 10:17:22.275101 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217" Apr 21 10:17:22.275582 kubelet[2351]: E0421 10:17:22.275193 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:22.681964 kubelet[2351]: E0421 10:17:22.681923 2351 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-109-217\" not found" node="172-236-109-217" Apr 21 10:17:22.831866 kubelet[2351]: I0421 10:17:22.831823 2351 kubelet_node_status.go:78] "Successfully registered node" node="172-236-109-217" Apr 21 10:17:22.831866 kubelet[2351]: E0421 10:17:22.831866 2351 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-236-109-217\": node \"172-236-109-217\" not found" Apr 21 10:17:22.851800 kubelet[2351]: E0421 10:17:22.851751 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:22.952740 kubelet[2351]: E0421 10:17:22.952588 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.053804 kubelet[2351]: E0421 10:17:23.053721 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.155033 kubelet[2351]: E0421 10:17:23.154977 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.255776 kubelet[2351]: E0421 10:17:23.255632 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.275937 kubelet[2351]: E0421 10:17:23.275888 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217" Apr 21 10:17:23.276421 kubelet[2351]: E0421 10:17:23.276040 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:23.276421 kubelet[2351]: E0421 10:17:23.276318 2351 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-109-217\" not found" node="172-236-109-217" Apr 21 10:17:23.276421 kubelet[2351]: E0421 10:17:23.276415 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:23.356657 kubelet[2351]: E0421 10:17:23.356606 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.457644 kubelet[2351]: E0421 10:17:23.457589 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.558538 kubelet[2351]: E0421 10:17:23.558384 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.659155 kubelet[2351]: E0421 10:17:23.659087 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.759798 kubelet[2351]: E0421 10:17:23.759712 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.790756 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 21 10:17:23.860582 kubelet[2351]: E0421 10:17:23.860515 2351 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-109-217\" not found" Apr 21 10:17:23.927578 kubelet[2351]: I0421 10:17:23.927523 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:23.934928 kubelet[2351]: I0421 10:17:23.934892 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-109-217" Apr 21 10:17:23.938271 kubelet[2351]: I0421 10:17:23.937681 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:24.207729 kubelet[2351]: I0421 10:17:24.207598 2351 apiserver.go:52] "Watching apiserver" Apr 21 10:17:24.209931 kubelet[2351]: E0421 10:17:24.209879 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:24.226007 kubelet[2351]: I0421 10:17:24.225988 2351 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:17:24.275201 kubelet[2351]: E0421 10:17:24.275144 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:24.275201 kubelet[2351]: I0421 10:17:24.275158 2351 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:24.279907 kubelet[2351]: E0421 10:17:24.279752 2351 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-109-217\" already exists" pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:24.279907 kubelet[2351]: E0421 10:17:24.279844 2351 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:24.818818 systemd[1]: Reloading requested from client PID 2638 ('systemctl') (unit session-7.scope)... Apr 21 10:17:24.818834 systemd[1]: Reloading... Apr 21 10:17:24.897579 zram_generator::config[2676]: No configuration found. Apr 21 10:17:25.019103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:17:25.092142 systemd[1]: Reloading finished in 272 ms. Apr 21 10:17:25.133027 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:17:25.152836 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:17:25.153267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:17:25.160432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:17:25.320722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:17:25.330083 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:17:25.371514 kubelet[2738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:17:25.372236 kubelet[2738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:17:25.372236 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:17:25.372236 kubelet[2738]: I0421 10:17:25.371913 2738 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:17:25.382581 kubelet[2738]: I0421 10:17:25.381486 2738 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:17:25.382581 kubelet[2738]: I0421 10:17:25.381510 2738 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:17:25.382581 kubelet[2738]: I0421 10:17:25.381779 2738 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:17:25.383237 kubelet[2738]: I0421 10:17:25.383211 2738 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:17:25.385487 kubelet[2738]: I0421 10:17:25.385463 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:17:25.396863 kubelet[2738]: E0421 10:17:25.396835 2738 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:17:25.397018 kubelet[2738]: I0421 10:17:25.397005 2738 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:17:25.404480 kubelet[2738]: I0421 10:17:25.404441 2738 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 10:17:25.405347 kubelet[2738]: I0421 10:17:25.405323 2738 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:17:25.405727 kubelet[2738]: I0421 10:17:25.405401 2738 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-109-217","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 10:17:25.406071 kubelet[2738]: I0421 10:17:25.406053 2738 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
10:17:25.406193 kubelet[2738]: I0421 10:17:25.406182 2738 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:17:25.406426 kubelet[2738]: I0421 10:17:25.406415 2738 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:17:25.406733 kubelet[2738]: I0421 10:17:25.406713 2738 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:17:25.406826 kubelet[2738]: I0421 10:17:25.406813 2738 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:17:25.406961 kubelet[2738]: I0421 10:17:25.406951 2738 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:17:25.407017 kubelet[2738]: I0421 10:17:25.407009 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:17:25.414881 kubelet[2738]: I0421 10:17:25.414852 2738 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:17:25.415375 kubelet[2738]: I0421 10:17:25.415350 2738 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:17:25.419814 kubelet[2738]: I0421 10:17:25.419772 2738 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:17:25.419922 kubelet[2738]: I0421 10:17:25.419825 2738 server.go:1289] "Started kubelet" Apr 21 10:17:25.422156 kubelet[2738]: I0421 10:17:25.421669 2738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:17:25.425568 kubelet[2738]: I0421 10:17:25.425523 2738 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:17:25.427114 kubelet[2738]: I0421 10:17:25.427099 2738 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:17:25.431301 kubelet[2738]: I0421 10:17:25.431259 2738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:17:25.433799 kubelet[2738]: I0421 10:17:25.431782 2738 
volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:17:25.433799 kubelet[2738]: I0421 10:17:25.433146 2738 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:17:25.433799 kubelet[2738]: I0421 10:17:25.433448 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:17:25.433799 kubelet[2738]: I0421 10:17:25.433520 2738 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:17:25.433799 kubelet[2738]: I0421 10:17:25.433724 2738 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:17:25.436713 kubelet[2738]: I0421 10:17:25.436671 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:17:25.440010 kubelet[2738]: I0421 10:17:25.439595 2738 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:17:25.440309 kubelet[2738]: I0421 10:17:25.440267 2738 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:17:25.440912 kubelet[2738]: E0421 10:17:25.440895 2738 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:17:25.442159 kubelet[2738]: I0421 10:17:25.439985 2738 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:17:25.442275 kubelet[2738]: I0421 10:17:25.442238 2738 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:17:25.442446 kubelet[2738]: I0421 10:17:25.442329 2738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:17:25.442512 kubelet[2738]: I0421 10:17:25.442502 2738 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:17:25.442649 kubelet[2738]: E0421 10:17:25.442627 2738 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:17:25.451206 kubelet[2738]: I0421 10:17:25.451185 2738 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:17:25.518648 kubelet[2738]: I0421 10:17:25.518606 2738 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:17:25.518648 kubelet[2738]: I0421 10:17:25.518627 2738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:17:25.518766 kubelet[2738]: I0421 10:17:25.518666 2738 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:17:25.518822 kubelet[2738]: I0421 10:17:25.518806 2738 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:17:25.518869 kubelet[2738]: I0421 10:17:25.518822 2738 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:17:25.518869 kubelet[2738]: I0421 10:17:25.518839 2738 policy_none.go:49] "None policy: Start" Apr 21 10:17:25.518869 kubelet[2738]: I0421 10:17:25.518848 2738 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:17:25.518869 kubelet[2738]: I0421 10:17:25.518859 2738 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:17:25.518992 kubelet[2738]: I0421 10:17:25.518979 2738 state_mem.go:75] "Updated machine memory state" Apr 21 10:17:25.520405 kubelet[2738]: E0421 10:17:25.520383 2738 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:17:25.520621 kubelet[2738]: I0421 10:17:25.520563 2738 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:17:25.520621 kubelet[2738]: I0421 10:17:25.520587 2738 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:17:25.522115 kubelet[2738]: I0421 10:17:25.521974 2738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:17:25.523516 kubelet[2738]: E0421 10:17:25.523488 2738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:17:25.546911 kubelet[2738]: I0421 10:17:25.546867 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:25.547571 kubelet[2738]: I0421 10:17:25.547353 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-109-217" Apr 21 10:17:25.548440 kubelet[2738]: I0421 10:17:25.548424 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:25.552618 kubelet[2738]: E0421 10:17:25.552595 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-109-217\" already exists" pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:25.553152 kubelet[2738]: E0421 10:17:25.553137 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-109-217\" already exists" pod="kube-system/kube-scheduler-172-236-109-217" Apr 21 10:17:25.553700 kubelet[2738]: E0421 10:17:25.553686 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-109-217\" already exists" pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:25.626983 kubelet[2738]: I0421 10:17:25.626659 2738 kubelet_node_status.go:75] "Attempting to register node" node="172-236-109-217" Apr 21 10:17:25.632928 kubelet[2738]: I0421 10:17:25.632889 2738 kubelet_node_status.go:124] "Node was previously registered" node="172-236-109-217" Apr 21 10:17:25.633013 kubelet[2738]: I0421 10:17:25.632970 2738 kubelet_node_status.go:78] 
"Successfully registered node" node="172-236-109-217" Apr 21 10:17:25.635127 kubelet[2738]: I0421 10:17:25.634852 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c98a34731afc6d7f9522d53329ced902-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-109-217\" (UID: \"c98a34731afc6d7f9522d53329ced902\") " pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:25.635127 kubelet[2738]: I0421 10:17:25.634909 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-flexvolume-dir\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:25.635127 kubelet[2738]: I0421 10:17:25.634933 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-k8s-certs\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:25.635127 kubelet[2738]: I0421 10:17:25.634951 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-kubeconfig\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:25.635127 kubelet[2738]: I0421 10:17:25.634982 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c98a34731afc6d7f9522d53329ced902-ca-certs\") pod 
\"kube-apiserver-172-236-109-217\" (UID: \"c98a34731afc6d7f9522d53329ced902\") " pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:25.635296 kubelet[2738]: I0421 10:17:25.634998 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c98a34731afc6d7f9522d53329ced902-k8s-certs\") pod \"kube-apiserver-172-236-109-217\" (UID: \"c98a34731afc6d7f9522d53329ced902\") " pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:25.635296 kubelet[2738]: I0421 10:17:25.635019 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-ca-certs\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:25.635296 kubelet[2738]: I0421 10:17:25.635044 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c43bbf208cda8b6e96c667271084db95-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-109-217\" (UID: \"c43bbf208cda8b6e96c667271084db95\") " pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:25.635296 kubelet[2738]: I0421 10:17:25.635062 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a7c02612c3b72d8e1adf41743540166-kubeconfig\") pod \"kube-scheduler-172-236-109-217\" (UID: \"2a7c02612c3b72d8e1adf41743540166\") " pod="kube-system/kube-scheduler-172-236-109-217" Apr 21 10:17:25.854634 kubelet[2738]: E0421 10:17:25.854485 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 
172.232.0.15 172.232.0.18" Apr 21 10:17:25.854634 kubelet[2738]: E0421 10:17:25.854560 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:25.855179 kubelet[2738]: E0421 10:17:25.855142 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:26.412700 kubelet[2738]: I0421 10:17:26.412610 2738 apiserver.go:52] "Watching apiserver" Apr 21 10:17:26.434339 kubelet[2738]: I0421 10:17:26.434276 2738 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:17:26.477743 kubelet[2738]: I0421 10:17:26.477474 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:26.478288 kubelet[2738]: I0421 10:17:26.478172 2738 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:26.478946 kubelet[2738]: E0421 10:17:26.478921 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:26.495851 kubelet[2738]: E0421 10:17:26.495794 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-109-217\" already exists" pod="kube-system/kube-apiserver-172-236-109-217" Apr 21 10:17:26.496076 kubelet[2738]: E0421 10:17:26.496037 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:26.496648 kubelet[2738]: E0421 10:17:26.496369 2738 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-172-236-109-217\" already exists" pod="kube-system/kube-controller-manager-172-236-109-217" Apr 21 10:17:26.496648 kubelet[2738]: E0421 10:17:26.496473 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:26.515176 kubelet[2738]: I0421 10:17:26.515037 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-109-217" podStartSLOduration=3.515014846 podStartE2EDuration="3.515014846s" podCreationTimestamp="2026-04-21 10:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:26.50402075 +0000 UTC m=+1.168833806" watchObservedRunningTime="2026-04-21 10:17:26.515014846 +0000 UTC m=+1.179827882" Apr 21 10:17:26.528159 kubelet[2738]: I0421 10:17:26.527940 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-109-217" podStartSLOduration=3.52791632 podStartE2EDuration="3.52791632s" podCreationTimestamp="2026-04-21 10:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:26.515263879 +0000 UTC m=+1.180076915" watchObservedRunningTime="2026-04-21 10:17:26.52791632 +0000 UTC m=+1.192729356" Apr 21 10:17:27.479200 kubelet[2738]: E0421 10:17:27.479142 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:27.479834 kubelet[2738]: E0421 10:17:27.479808 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 
172.232.0.18" Apr 21 10:17:27.481051 kubelet[2738]: E0421 10:17:27.481008 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:30.741379 kubelet[2738]: E0421 10:17:30.741317 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:31.005427 kubelet[2738]: I0421 10:17:31.005148 2738 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:17:31.005533 containerd[1580]: time="2026-04-21T10:17:31.005430896Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:17:31.006375 kubelet[2738]: I0421 10:17:31.006071 2738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:17:31.332363 kubelet[2738]: E0421 10:17:31.332300 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:31.344662 kubelet[2738]: I0421 10:17:31.344566 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-109-217" podStartSLOduration=8.34453601 podStartE2EDuration="8.34453601s" podCreationTimestamp="2026-04-21 10:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:26.528130492 +0000 UTC m=+1.192943528" watchObservedRunningTime="2026-04-21 10:17:31.34453601 +0000 UTC m=+6.009349046" Apr 21 10:17:31.486204 kubelet[2738]: E0421 10:17:31.486165 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:31.573311 kubelet[2738]: I0421 10:17:31.573233 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f445e90-c201-4b6c-8be5-b844a9993685-kube-proxy\") pod \"kube-proxy-rzdlv\" (UID: \"7f445e90-c201-4b6c-8be5-b844a9993685\") " pod="kube-system/kube-proxy-rzdlv" Apr 21 10:17:31.573311 kubelet[2738]: I0421 10:17:31.573277 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f445e90-c201-4b6c-8be5-b844a9993685-lib-modules\") pod \"kube-proxy-rzdlv\" (UID: \"7f445e90-c201-4b6c-8be5-b844a9993685\") " pod="kube-system/kube-proxy-rzdlv" Apr 21 10:17:31.573311 kubelet[2738]: I0421 10:17:31.573295 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f445e90-c201-4b6c-8be5-b844a9993685-xtables-lock\") pod \"kube-proxy-rzdlv\" (UID: \"7f445e90-c201-4b6c-8be5-b844a9993685\") " pod="kube-system/kube-proxy-rzdlv" Apr 21 10:17:31.573311 kubelet[2738]: I0421 10:17:31.573319 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n82vv\" (UniqueName: \"kubernetes.io/projected/7f445e90-c201-4b6c-8be5-b844a9993685-kube-api-access-n82vv\") pod \"kube-proxy-rzdlv\" (UID: \"7f445e90-c201-4b6c-8be5-b844a9993685\") " pod="kube-system/kube-proxy-rzdlv" Apr 21 10:17:31.679120 kubelet[2738]: E0421 10:17:31.678708 2738 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 21 10:17:31.679120 kubelet[2738]: E0421 10:17:31.678801 2738 projected.go:194] Error preparing data for projected volume kube-api-access-n82vv for pod kube-system/kube-proxy-rzdlv: 
configmap "kube-root-ca.crt" not found Apr 21 10:17:31.679120 kubelet[2738]: E0421 10:17:31.678905 2738 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f445e90-c201-4b6c-8be5-b844a9993685-kube-api-access-n82vv podName:7f445e90-c201-4b6c-8be5-b844a9993685 nodeName:}" failed. No retries permitted until 2026-04-21 10:17:32.178884209 +0000 UTC m=+6.843697255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n82vv" (UniqueName: "kubernetes.io/projected/7f445e90-c201-4b6c-8be5-b844a9993685-kube-api-access-n82vv") pod "kube-proxy-rzdlv" (UID: "7f445e90-c201-4b6c-8be5-b844a9993685") : configmap "kube-root-ca.crt" not found Apr 21 10:17:32.177377 kubelet[2738]: I0421 10:17:32.177318 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a1b8de0-bf37-4246-9ad1-821a714abf6a-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-hpktp\" (UID: \"5a1b8de0-bf37-4246-9ad1-821a714abf6a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hpktp" Apr 21 10:17:32.177377 kubelet[2738]: I0421 10:17:32.177364 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmw4m\" (UniqueName: \"kubernetes.io/projected/5a1b8de0-bf37-4246-9ad1-821a714abf6a-kube-api-access-xmw4m\") pod \"tigera-operator-6bf85f8dd-hpktp\" (UID: \"5a1b8de0-bf37-4246-9ad1-821a714abf6a\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hpktp" Apr 21 10:17:32.414024 kubelet[2738]: E0421 10:17:32.413989 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:32.414830 containerd[1580]: time="2026-04-21T10:17:32.414779964Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-rzdlv,Uid:7f445e90-c201-4b6c-8be5-b844a9993685,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:32.433350 containerd[1580]: time="2026-04-21T10:17:32.432945918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-hpktp,Uid:5a1b8de0-bf37-4246-9ad1-821a714abf6a,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:17:32.441364 containerd[1580]: time="2026-04-21T10:17:32.441215155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:32.441455 containerd[1580]: time="2026-04-21T10:17:32.441396416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:32.441455 containerd[1580]: time="2026-04-21T10:17:32.441427596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:32.441892 containerd[1580]: time="2026-04-21T10:17:32.441820829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:32.477849 containerd[1580]: time="2026-04-21T10:17:32.477731565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:32.477849 containerd[1580]: time="2026-04-21T10:17:32.477812226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:32.478193 containerd[1580]: time="2026-04-21T10:17:32.478066737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:32.478451 containerd[1580]: time="2026-04-21T10:17:32.478375839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:32.506255 containerd[1580]: time="2026-04-21T10:17:32.506221160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rzdlv,Uid:7f445e90-c201-4b6c-8be5-b844a9993685,Namespace:kube-system,Attempt:0,} returns sandbox id \"959d6a0b0ac29249c914e22bdc1843ca6a1aab600488e361af877a015c855f28\"" Apr 21 10:17:32.507867 kubelet[2738]: E0421 10:17:32.507842 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:32.513686 containerd[1580]: time="2026-04-21T10:17:32.513648290Z" level=info msg="CreateContainer within sandbox \"959d6a0b0ac29249c914e22bdc1843ca6a1aab600488e361af877a015c855f28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:17:32.527657 containerd[1580]: time="2026-04-21T10:17:32.527617786Z" level=info msg="CreateContainer within sandbox \"959d6a0b0ac29249c914e22bdc1843ca6a1aab600488e361af877a015c855f28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"adb5be50a3891bfcbd0c17cb7e136d35a9096a21521b3343dbb83828bd103ecc\"" Apr 21 10:17:32.529179 containerd[1580]: time="2026-04-21T10:17:32.529144497Z" level=info msg="StartContainer for \"adb5be50a3891bfcbd0c17cb7e136d35a9096a21521b3343dbb83828bd103ecc\"" Apr 21 10:17:32.561408 containerd[1580]: time="2026-04-21T10:17:32.561318507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-hpktp,Uid:5a1b8de0-bf37-4246-9ad1-821a714abf6a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"48eaeb977d1c7274cccd25bab6bc0efbf860f52656a85ac642081efedf6e370f\"" Apr 21 10:17:32.565116 containerd[1580]: time="2026-04-21T10:17:32.564910432Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:17:32.597266 containerd[1580]: time="2026-04-21T10:17:32.597218823Z" level=info msg="StartContainer 
for \"adb5be50a3891bfcbd0c17cb7e136d35a9096a21521b3343dbb83828bd103ecc\" returns successfully" Apr 21 10:17:33.033814 kubelet[2738]: E0421 10:17:33.033490 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:33.491802 kubelet[2738]: E0421 10:17:33.491766 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:33.493114 kubelet[2738]: E0421 10:17:33.493068 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:33.513392 kubelet[2738]: I0421 10:17:33.513286 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rzdlv" podStartSLOduration=2.513265898 podStartE2EDuration="2.513265898s" podCreationTimestamp="2026-04-21 10:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:33.511833268 +0000 UTC m=+8.176646304" watchObservedRunningTime="2026-04-21 10:17:33.513265898 +0000 UTC m=+8.178078944" Apr 21 10:17:33.593631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount83010211.mount: Deactivated successfully. 
Apr 21 10:17:34.476419 containerd[1580]: time="2026-04-21T10:17:34.476354728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:34.477379 containerd[1580]: time="2026-04-21T10:17:34.477175863Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:17:34.479204 containerd[1580]: time="2026-04-21T10:17:34.477862857Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:34.480248 containerd[1580]: time="2026-04-21T10:17:34.479840569Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:34.480773 containerd[1580]: time="2026-04-21T10:17:34.480744244Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 1.915806992s" Apr 21 10:17:34.480807 containerd[1580]: time="2026-04-21T10:17:34.480775455Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:17:34.484938 containerd[1580]: time="2026-04-21T10:17:34.484914950Z" level=info msg="CreateContainer within sandbox \"48eaeb977d1c7274cccd25bab6bc0efbf860f52656a85ac642081efedf6e370f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:17:34.495618 containerd[1580]: time="2026-04-21T10:17:34.495018532Z" level=info msg="CreateContainer within sandbox 
\"48eaeb977d1c7274cccd25bab6bc0efbf860f52656a85ac642081efedf6e370f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"220f79dec80c1395a69e93c9d4a43664dfade1663c291d60029869b325436b48\"" Apr 21 10:17:34.496249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249650069.mount: Deactivated successfully. Apr 21 10:17:34.497033 containerd[1580]: time="2026-04-21T10:17:34.495983928Z" level=info msg="StartContainer for \"220f79dec80c1395a69e93c9d4a43664dfade1663c291d60029869b325436b48\"" Apr 21 10:17:34.550556 containerd[1580]: time="2026-04-21T10:17:34.550496773Z" level=info msg="StartContainer for \"220f79dec80c1395a69e93c9d4a43664dfade1663c291d60029869b325436b48\" returns successfully" Apr 21 10:17:35.506174 kubelet[2738]: I0421 10:17:35.506064 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-hpktp" podStartSLOduration=1.588477183 podStartE2EDuration="3.506048557s" podCreationTimestamp="2026-04-21 10:17:32 +0000 UTC" firstStartedPulling="2026-04-21 10:17:32.564034426 +0000 UTC m=+7.228847462" lastFinishedPulling="2026-04-21 10:17:34.4816058 +0000 UTC m=+9.146418836" observedRunningTime="2026-04-21 10:17:35.504619089 +0000 UTC m=+10.169432125" watchObservedRunningTime="2026-04-21 10:17:35.506048557 +0000 UTC m=+10.170861593" Apr 21 10:17:38.215798 update_engine[1564]: I20260421 10:17:38.214948 1564 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:17:38.352629 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3112) Apr 21 10:17:38.478580 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3113) Apr 21 10:17:40.011765 sudo[1827]: pam_unix(sudo:session): session closed for user root Apr 21 10:17:40.113274 sshd[1823]: pam_unix(sshd:session): session closed for user core Apr 21 10:17:40.121357 systemd[1]: sshd@6-172.236.109.217:22-50.85.169.122:36900.service: Deactivated successfully. Apr 21 10:17:40.130474 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:17:40.132753 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:17:40.134500 systemd-logind[1562]: Removed session 7. Apr 21 10:17:40.746576 kubelet[2738]: E0421 10:17:40.746524 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:41.511008 kubelet[2738]: E0421 10:17:41.510951 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:42.545375 kubelet[2738]: I0421 10:17:42.545318 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6-tigera-ca-bundle\") pod \"calico-typha-775f7fc9d7-9h52n\" (UID: \"a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6\") " pod="calico-system/calico-typha-775f7fc9d7-9h52n" Apr 21 10:17:42.545375 kubelet[2738]: I0421 10:17:42.545362 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6-typha-certs\") pod 
\"calico-typha-775f7fc9d7-9h52n\" (UID: \"a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6\") " pod="calico-system/calico-typha-775f7fc9d7-9h52n" Apr 21 10:17:42.545375 kubelet[2738]: I0421 10:17:42.545380 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6zw8\" (UniqueName: \"kubernetes.io/projected/a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6-kube-api-access-p6zw8\") pod \"calico-typha-775f7fc9d7-9h52n\" (UID: \"a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6\") " pod="calico-system/calico-typha-775f7fc9d7-9h52n" Apr 21 10:17:42.646511 kubelet[2738]: I0421 10:17:42.646462 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-policysync\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646737 kubelet[2738]: I0421 10:17:42.646526 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-cni-log-dir\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646737 kubelet[2738]: I0421 10:17:42.646571 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-bpffs\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646737 kubelet[2738]: I0421 10:17:42.646595 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82ff8857-702f-4cc7-a578-8437a91eaade-tigera-ca-bundle\") pod \"calico-node-nqpcq\" (UID: 
\"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646737 kubelet[2738]: I0421 10:17:42.646612 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-xtables-lock\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646737 kubelet[2738]: I0421 10:17:42.646644 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-lib-modules\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646948 kubelet[2738]: I0421 10:17:42.646671 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-sys-fs\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646948 kubelet[2738]: I0421 10:17:42.646704 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-cni-net-dir\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646948 kubelet[2738]: I0421 10:17:42.646718 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d96gq\" (UniqueName: \"kubernetes.io/projected/82ff8857-702f-4cc7-a578-8437a91eaade-kube-api-access-d96gq\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" 
Apr 21 10:17:42.646948 kubelet[2738]: I0421 10:17:42.646736 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-nodeproc\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.646948 kubelet[2738]: I0421 10:17:42.646751 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-var-lib-calico\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.647244 kubelet[2738]: I0421 10:17:42.646772 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-cni-bin-dir\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.647244 kubelet[2738]: I0421 10:17:42.646786 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-var-run-calico\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.647244 kubelet[2738]: I0421 10:17:42.646799 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/82ff8857-702f-4cc7-a578-8437a91eaade-node-certs\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.647244 kubelet[2738]: I0421 10:17:42.646816 2738 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/82ff8857-702f-4cc7-a578-8437a91eaade-flexvol-driver-host\") pod \"calico-node-nqpcq\" (UID: \"82ff8857-702f-4cc7-a578-8437a91eaade\") " pod="calico-system/calico-node-nqpcq" Apr 21 10:17:42.694528 kubelet[2738]: E0421 10:17:42.694260 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjz5l" podUID="768b5922-7716-4a2f-ad9a-14196f3f0888" Apr 21 10:17:42.747356 kubelet[2738]: I0421 10:17:42.747308 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/768b5922-7716-4a2f-ad9a-14196f3f0888-registration-dir\") pod \"csi-node-driver-zjz5l\" (UID: \"768b5922-7716-4a2f-ad9a-14196f3f0888\") " pod="calico-system/csi-node-driver-zjz5l" Apr 21 10:17:42.748478 kubelet[2738]: I0421 10:17:42.748252 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85g7b\" (UniqueName: \"kubernetes.io/projected/768b5922-7716-4a2f-ad9a-14196f3f0888-kube-api-access-85g7b\") pod \"csi-node-driver-zjz5l\" (UID: \"768b5922-7716-4a2f-ad9a-14196f3f0888\") " pod="calico-system/csi-node-driver-zjz5l" Apr 21 10:17:42.748478 kubelet[2738]: I0421 10:17:42.748318 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/768b5922-7716-4a2f-ad9a-14196f3f0888-varrun\") pod \"csi-node-driver-zjz5l\" (UID: \"768b5922-7716-4a2f-ad9a-14196f3f0888\") " pod="calico-system/csi-node-driver-zjz5l" Apr 21 10:17:42.748478 kubelet[2738]: I0421 10:17:42.748365 2738 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/768b5922-7716-4a2f-ad9a-14196f3f0888-kubelet-dir\") pod \"csi-node-driver-zjz5l\" (UID: \"768b5922-7716-4a2f-ad9a-14196f3f0888\") " pod="calico-system/csi-node-driver-zjz5l" Apr 21 10:17:42.748478 kubelet[2738]: I0421 10:17:42.748379 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/768b5922-7716-4a2f-ad9a-14196f3f0888-socket-dir\") pod \"csi-node-driver-zjz5l\" (UID: \"768b5922-7716-4a2f-ad9a-14196f3f0888\") " pod="calico-system/csi-node-driver-zjz5l" Apr 21 10:17:42.756585 kubelet[2738]: E0421 10:17:42.754806 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.758284 kubelet[2738]: W0421 10:17:42.758262 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.758366 kubelet[2738]: E0421 10:17:42.758354 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.764352 kubelet[2738]: E0421 10:17:42.764334 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.764494 kubelet[2738]: W0421 10:17:42.764480 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.764592 kubelet[2738]: E0421 10:17:42.764580 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.767350 kubelet[2738]: E0421 10:17:42.767319 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.767350 kubelet[2738]: W0421 10:17:42.767339 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.767426 kubelet[2738]: E0421 10:17:42.767357 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.801745 kubelet[2738]: E0421 10:17:42.801461 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:42.804923 containerd[1580]: time="2026-04-21T10:17:42.804894168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-775f7fc9d7-9h52n,Uid:a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:42.832697 containerd[1580]: time="2026-04-21T10:17:42.832579431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:42.832697 containerd[1580]: time="2026-04-21T10:17:42.832669581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:42.832878 containerd[1580]: time="2026-04-21T10:17:42.832683821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:42.832951 containerd[1580]: time="2026-04-21T10:17:42.832822892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:42.848908 kubelet[2738]: E0421 10:17:42.848828 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.848908 kubelet[2738]: W0421 10:17:42.848845 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.848908 kubelet[2738]: E0421 10:17:42.848862 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.850318 kubelet[2738]: E0421 10:17:42.849766 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.850318 kubelet[2738]: W0421 10:17:42.849780 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.850318 kubelet[2738]: E0421 10:17:42.849792 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.851930 kubelet[2738]: E0421 10:17:42.851796 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.851930 kubelet[2738]: W0421 10:17:42.851809 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.851930 kubelet[2738]: E0421 10:17:42.851824 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.852440 kubelet[2738]: E0421 10:17:42.852129 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.852440 kubelet[2738]: W0421 10:17:42.852153 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.852440 kubelet[2738]: E0421 10:17:42.852164 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.852803 kubelet[2738]: E0421 10:17:42.852792 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.852943 kubelet[2738]: W0421 10:17:42.852844 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.853009 kubelet[2738]: E0421 10:17:42.852998 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.853352 kubelet[2738]: E0421 10:17:42.853341 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.853410 kubelet[2738]: W0421 10:17:42.853400 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.853452 kubelet[2738]: E0421 10:17:42.853442 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.854253 kubelet[2738]: E0421 10:17:42.853819 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.854253 kubelet[2738]: W0421 10:17:42.853829 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.854253 kubelet[2738]: E0421 10:17:42.853838 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.854321 kubelet[2738]: E0421 10:17:42.854298 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.854321 kubelet[2738]: W0421 10:17:42.854308 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.854321 kubelet[2738]: E0421 10:17:42.854319 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.854619 kubelet[2738]: E0421 10:17:42.854604 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.854619 kubelet[2738]: W0421 10:17:42.854617 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.854680 kubelet[2738]: E0421 10:17:42.854625 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.854954 kubelet[2738]: E0421 10:17:42.854927 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.854954 kubelet[2738]: W0421 10:17:42.854942 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.854954 kubelet[2738]: E0421 10:17:42.854953 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.855385 kubelet[2738]: E0421 10:17:42.855357 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.855385 kubelet[2738]: W0421 10:17:42.855369 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.855385 kubelet[2738]: E0421 10:17:42.855378 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.855637 kubelet[2738]: E0421 10:17:42.855617 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.855637 kubelet[2738]: W0421 10:17:42.855630 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.855637 kubelet[2738]: E0421 10:17:42.855638 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.855857 kubelet[2738]: E0421 10:17:42.855834 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.855900 kubelet[2738]: W0421 10:17:42.855857 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.855900 kubelet[2738]: E0421 10:17:42.855876 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.856236 kubelet[2738]: E0421 10:17:42.856219 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.856236 kubelet[2738]: W0421 10:17:42.856232 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.856508 kubelet[2738]: E0421 10:17:42.856243 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.856759 kubelet[2738]: E0421 10:17:42.856689 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.856759 kubelet[2738]: W0421 10:17:42.856718 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.856759 kubelet[2738]: E0421 10:17:42.856726 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.856992 kubelet[2738]: E0421 10:17:42.856967 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.856992 kubelet[2738]: W0421 10:17:42.856980 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.856992 kubelet[2738]: E0421 10:17:42.856989 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.857570 kubelet[2738]: E0421 10:17:42.857264 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.857570 kubelet[2738]: W0421 10:17:42.857273 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.857570 kubelet[2738]: E0421 10:17:42.857282 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.857570 kubelet[2738]: E0421 10:17:42.857475 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.857570 kubelet[2738]: W0421 10:17:42.857482 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.857570 kubelet[2738]: E0421 10:17:42.857491 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.857820 kubelet[2738]: E0421 10:17:42.857806 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.857820 kubelet[2738]: W0421 10:17:42.857818 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.857888 kubelet[2738]: E0421 10:17:42.857826 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.858124 kubelet[2738]: E0421 10:17:42.858109 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.858124 kubelet[2738]: W0421 10:17:42.858122 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.858295 kubelet[2738]: E0421 10:17:42.858131 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.859742 kubelet[2738]: E0421 10:17:42.859723 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.859778 kubelet[2738]: W0421 10:17:42.859749 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.859778 kubelet[2738]: E0421 10:17:42.859760 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.860560 kubelet[2738]: E0421 10:17:42.860521 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.860560 kubelet[2738]: W0421 10:17:42.860536 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.860630 kubelet[2738]: E0421 10:17:42.860575 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.861666 kubelet[2738]: E0421 10:17:42.861648 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.861666 kubelet[2738]: W0421 10:17:42.861662 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.861750 kubelet[2738]: E0421 10:17:42.861673 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.863187 kubelet[2738]: E0421 10:17:42.863072 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.863187 kubelet[2738]: W0421 10:17:42.863085 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.863187 kubelet[2738]: E0421 10:17:42.863115 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.863566 kubelet[2738]: E0421 10:17:42.863491 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.863566 kubelet[2738]: W0421 10:17:42.863503 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.863566 kubelet[2738]: E0421 10:17:42.863529 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:42.869345 kubelet[2738]: E0421 10:17:42.869293 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:42.869345 kubelet[2738]: W0421 10:17:42.869305 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:42.869345 kubelet[2738]: E0421 10:17:42.869316 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:42.887805 containerd[1580]: time="2026-04-21T10:17:42.887425684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nqpcq,Uid:82ff8857-702f-4cc7-a578-8437a91eaade,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:42.914741 containerd[1580]: time="2026-04-21T10:17:42.914669065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-775f7fc9d7-9h52n,Uid:a9dc9eb3-1601-4e5c-b77f-f10a9d0819a6,Namespace:calico-system,Attempt:0,} returns sandbox id \"a97a3cf5af83335619f70137f410d0727706e5d8cb8e66b83cf14b4f2194af10\"" Apr 21 10:17:42.917587 kubelet[2738]: E0421 10:17:42.915842 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:42.920467 containerd[1580]: time="2026-04-21T10:17:42.920433388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:17:42.930584 containerd[1580]: time="2026-04-21T10:17:42.929920806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:42.930584 containerd[1580]: time="2026-04-21T10:17:42.929982657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:42.930584 containerd[1580]: time="2026-04-21T10:17:42.930032217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:42.930934 containerd[1580]: time="2026-04-21T10:17:42.930906520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:42.985325 containerd[1580]: time="2026-04-21T10:17:42.985007880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nqpcq,Uid:82ff8857-702f-4cc7-a578-8437a91eaade,Namespace:calico-system,Attempt:0,} returns sandbox id \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\"" Apr 21 10:17:43.673753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708948431.mount: Deactivated successfully. Apr 21 10:17:44.193923 containerd[1580]: time="2026-04-21T10:17:44.193883603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:44.195191 containerd[1580]: time="2026-04-21T10:17:44.194848817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:17:44.195629 containerd[1580]: time="2026-04-21T10:17:44.195579890Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:44.198569 containerd[1580]: time="2026-04-21T10:17:44.198008048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:44.199765 containerd[1580]: time="2026-04-21T10:17:44.198830931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.278364043s" Apr 21 10:17:44.199765 containerd[1580]: time="2026-04-21T10:17:44.198886182Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:17:44.201183 containerd[1580]: time="2026-04-21T10:17:44.201164120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:17:44.220156 containerd[1580]: time="2026-04-21T10:17:44.220126570Z" level=info msg="CreateContainer within sandbox \"a97a3cf5af83335619f70137f410d0727706e5d8cb8e66b83cf14b4f2194af10\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:17:44.233419 containerd[1580]: time="2026-04-21T10:17:44.233388730Z" level=info msg="CreateContainer within sandbox \"a97a3cf5af83335619f70137f410d0727706e5d8cb8e66b83cf14b4f2194af10\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a830c544a63e9b6ce1b433c35e279f7bbb18fcf69cd98947ca0437bf2e42c37\"" Apr 21 10:17:44.235683 containerd[1580]: time="2026-04-21T10:17:44.235652978Z" level=info msg="StartContainer for \"4a830c544a63e9b6ce1b433c35e279f7bbb18fcf69cd98947ca0437bf2e42c37\"" Apr 21 10:17:44.316333 containerd[1580]: time="2026-04-21T10:17:44.316253425Z" level=info msg="StartContainer for \"4a830c544a63e9b6ce1b433c35e279f7bbb18fcf69cd98947ca0437bf2e42c37\" returns successfully" Apr 21 10:17:44.445277 kubelet[2738]: E0421 10:17:44.442826 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjz5l" podUID="768b5922-7716-4a2f-ad9a-14196f3f0888" Apr 21 10:17:44.529593 kubelet[2738]: E0421 10:17:44.528914 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:44.549731 kubelet[2738]: E0421 10:17:44.549695 
2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.549731 kubelet[2738]: W0421 10:17:44.549726 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.549859 kubelet[2738]: E0421 10:17:44.549754 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.550591 kubelet[2738]: E0421 10:17:44.550511 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.550591 kubelet[2738]: W0421 10:17:44.550533 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.550591 kubelet[2738]: E0421 10:17:44.550590 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.551515 kubelet[2738]: E0421 10:17:44.551491 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.551515 kubelet[2738]: W0421 10:17:44.551506 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.552420 kubelet[2738]: E0421 10:17:44.551517 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.555213 kubelet[2738]: E0421 10:17:44.555184 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.555213 kubelet[2738]: W0421 10:17:44.555206 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.555310 kubelet[2738]: E0421 10:17:44.555220 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.556938 kubelet[2738]: E0421 10:17:44.556913 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.557063 kubelet[2738]: W0421 10:17:44.556951 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.557063 kubelet[2738]: E0421 10:17:44.556964 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.557194 kubelet[2738]: E0421 10:17:44.557173 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.557194 kubelet[2738]: W0421 10:17:44.557188 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.557194 kubelet[2738]: E0421 10:17:44.557197 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.557495 kubelet[2738]: E0421 10:17:44.557401 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.557495 kubelet[2738]: W0421 10:17:44.557414 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.557495 kubelet[2738]: E0421 10:17:44.557422 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.558315 kubelet[2738]: E0421 10:17:44.557647 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.558315 kubelet[2738]: W0421 10:17:44.557659 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.558315 kubelet[2738]: E0421 10:17:44.557670 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.558315 kubelet[2738]: E0421 10:17:44.557923 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.558315 kubelet[2738]: W0421 10:17:44.557931 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.558315 kubelet[2738]: E0421 10:17:44.557939 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.558315 kubelet[2738]: E0421 10:17:44.558288 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.558315 kubelet[2738]: W0421 10:17:44.558296 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.558315 kubelet[2738]: E0421 10:17:44.558304 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.558703 kubelet[2738]: E0421 10:17:44.558502 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.558703 kubelet[2738]: W0421 10:17:44.558512 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.558703 kubelet[2738]: E0421 10:17:44.558520 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.558804 kubelet[2738]: E0421 10:17:44.558735 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.558804 kubelet[2738]: W0421 10:17:44.558744 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.558804 kubelet[2738]: E0421 10:17:44.558752 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.559138 kubelet[2738]: E0421 10:17:44.558958 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.559138 kubelet[2738]: W0421 10:17:44.558969 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.559138 kubelet[2738]: E0421 10:17:44.558977 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.559437 kubelet[2738]: E0421 10:17:44.559397 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.559437 kubelet[2738]: W0421 10:17:44.559405 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.559437 kubelet[2738]: E0421 10:17:44.559412 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.560521 kubelet[2738]: E0421 10:17:44.559678 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.560521 kubelet[2738]: W0421 10:17:44.559690 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.560521 kubelet[2738]: E0421 10:17:44.559698 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.571516 kubelet[2738]: E0421 10:17:44.570908 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.571516 kubelet[2738]: W0421 10:17:44.570933 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.571516 kubelet[2738]: E0421 10:17:44.570960 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.572569 kubelet[2738]: E0421 10:17:44.572407 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.572569 kubelet[2738]: W0421 10:17:44.572452 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.572569 kubelet[2738]: E0421 10:17:44.572470 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.573719 kubelet[2738]: E0421 10:17:44.573687 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.573719 kubelet[2738]: W0421 10:17:44.573712 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.574632 kubelet[2738]: E0421 10:17:44.573819 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.574632 kubelet[2738]: E0421 10:17:44.574287 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.574632 kubelet[2738]: W0421 10:17:44.574325 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.574632 kubelet[2738]: E0421 10:17:44.574336 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.574853 kubelet[2738]: E0421 10:17:44.574713 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.574853 kubelet[2738]: W0421 10:17:44.574749 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.574853 kubelet[2738]: E0421 10:17:44.574760 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.576586 kubelet[2738]: E0421 10:17:44.575102 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.576586 kubelet[2738]: W0421 10:17:44.575114 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.576586 kubelet[2738]: E0421 10:17:44.575123 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.576586 kubelet[2738]: E0421 10:17:44.575439 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.576586 kubelet[2738]: W0421 10:17:44.575449 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.576586 kubelet[2738]: E0421 10:17:44.575458 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.576586 kubelet[2738]: E0421 10:17:44.575773 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.576586 kubelet[2738]: W0421 10:17:44.575781 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.576586 kubelet[2738]: E0421 10:17:44.575789 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.576586 kubelet[2738]: E0421 10:17:44.576291 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.576932 kubelet[2738]: W0421 10:17:44.576300 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.576932 kubelet[2738]: E0421 10:17:44.576308 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.576932 kubelet[2738]: E0421 10:17:44.576612 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.576932 kubelet[2738]: W0421 10:17:44.576621 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.576932 kubelet[2738]: E0421 10:17:44.576630 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.576932 kubelet[2738]: E0421 10:17:44.576928 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.576932 kubelet[2738]: W0421 10:17:44.576937 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.577156 kubelet[2738]: E0421 10:17:44.576947 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.578141 kubelet[2738]: E0421 10:17:44.577239 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.578141 kubelet[2738]: W0421 10:17:44.577251 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.578141 kubelet[2738]: E0421 10:17:44.577286 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.578141 kubelet[2738]: E0421 10:17:44.577853 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.578141 kubelet[2738]: W0421 10:17:44.577862 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.578141 kubelet[2738]: E0421 10:17:44.577870 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.578990 kubelet[2738]: E0421 10:17:44.578365 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.578990 kubelet[2738]: W0421 10:17:44.578380 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.578990 kubelet[2738]: E0421 10:17:44.578389 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.578990 kubelet[2738]: E0421 10:17:44.578711 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.578990 kubelet[2738]: W0421 10:17:44.578737 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.578990 kubelet[2738]: E0421 10:17:44.578746 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.579412 kubelet[2738]: E0421 10:17:44.579081 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.579412 kubelet[2738]: W0421 10:17:44.579091 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.579412 kubelet[2738]: E0421 10:17:44.579122 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:44.579892 kubelet[2738]: E0421 10:17:44.579675 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.579892 kubelet[2738]: W0421 10:17:44.579688 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.579892 kubelet[2738]: E0421 10:17:44.579696 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.580471 kubelet[2738]: E0421 10:17:44.580441 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:44.580471 kubelet[2738]: W0421 10:17:44.580458 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:44.580574 kubelet[2738]: E0421 10:17:44.580476 2738 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:44.659634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055686783.mount: Deactivated successfully. 
Apr 21 10:17:44.938732 containerd[1580]: time="2026-04-21T10:17:44.938665075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:44.941978 containerd[1580]: time="2026-04-21T10:17:44.940453781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:17:44.941978 containerd[1580]: time="2026-04-21T10:17:44.940676683Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:44.943074 containerd[1580]: time="2026-04-21T10:17:44.943021801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:44.944206 containerd[1580]: time="2026-04-21T10:17:44.944169875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 742.670623ms" Apr 21 10:17:44.944247 containerd[1580]: time="2026-04-21T10:17:44.944205605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:17:44.947576 containerd[1580]: time="2026-04-21T10:17:44.947526148Z" level=info msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 
10:17:44.960131 containerd[1580]: time="2026-04-21T10:17:44.960093074Z" level=info msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d4215b1c2dddd1a3112a08f1f0c0c0300f39475f8c226814722fe2d98fb65909\"" Apr 21 10:17:44.963683 containerd[1580]: time="2026-04-21T10:17:44.963630788Z" level=info msg="StartContainer for \"d4215b1c2dddd1a3112a08f1f0c0c0300f39475f8c226814722fe2d98fb65909\"" Apr 21 10:17:45.036924 containerd[1580]: time="2026-04-21T10:17:45.036820772Z" level=info msg="StartContainer for \"d4215b1c2dddd1a3112a08f1f0c0c0300f39475f8c226814722fe2d98fb65909\" returns successfully" Apr 21 10:17:45.075041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4215b1c2dddd1a3112a08f1f0c0c0300f39475f8c226814722fe2d98fb65909-rootfs.mount: Deactivated successfully. Apr 21 10:17:45.188872 containerd[1580]: time="2026-04-21T10:17:45.188734878Z" level=info msg="shim disconnected" id=d4215b1c2dddd1a3112a08f1f0c0c0300f39475f8c226814722fe2d98fb65909 namespace=k8s.io Apr 21 10:17:45.188872 containerd[1580]: time="2026-04-21T10:17:45.188780008Z" level=warning msg="cleaning up after shim disconnected" id=d4215b1c2dddd1a3112a08f1f0c0c0300f39475f8c226814722fe2d98fb65909 namespace=k8s.io Apr 21 10:17:45.188872 containerd[1580]: time="2026-04-21T10:17:45.188788758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:17:45.532696 kubelet[2738]: I0421 10:17:45.531031 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:17:45.532696 kubelet[2738]: E0421 10:17:45.531390 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:45.533940 containerd[1580]: time="2026-04-21T10:17:45.533500713Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:17:45.549353 kubelet[2738]: I0421 10:17:45.549086 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-775f7fc9d7-9h52n" podStartSLOduration=2.267354204 podStartE2EDuration="3.549072919s" podCreationTimestamp="2026-04-21 10:17:42 +0000 UTC" firstStartedPulling="2026-04-21 10:17:42.919322224 +0000 UTC m=+17.584135260" lastFinishedPulling="2026-04-21 10:17:44.201040929 +0000 UTC m=+18.865853975" observedRunningTime="2026-04-21 10:17:44.556339712 +0000 UTC m=+19.221152748" watchObservedRunningTime="2026-04-21 10:17:45.549072919 +0000 UTC m=+20.213885955" Apr 21 10:17:46.443923 kubelet[2738]: E0421 10:17:46.443782 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjz5l" podUID="768b5922-7716-4a2f-ad9a-14196f3f0888" Apr 21 10:17:48.444722 kubelet[2738]: E0421 10:17:48.444529 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjz5l" podUID="768b5922-7716-4a2f-ad9a-14196f3f0888" Apr 21 10:17:49.207068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2202784505.mount: Deactivated successfully. 
Apr 21 10:17:49.243456 containerd[1580]: time="2026-04-21T10:17:49.243375499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:49.244645 containerd[1580]: time="2026-04-21T10:17:49.244043491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:17:49.245121 containerd[1580]: time="2026-04-21T10:17:49.245071274Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:49.246745 containerd[1580]: time="2026-04-21T10:17:49.246700399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:49.248118 containerd[1580]: time="2026-04-21T10:17:49.247517182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.713988759s" Apr 21 10:17:49.248118 containerd[1580]: time="2026-04-21T10:17:49.247564532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:17:49.251823 containerd[1580]: time="2026-04-21T10:17:49.251799544Z" level=info msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:17:49.266375 containerd[1580]: time="2026-04-21T10:17:49.266344397Z" level=info 
msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d96b6047be15952952afdeab899af4d9ee157bdb0f5a1af59ec4cd7e8bf3c65e\"" Apr 21 10:17:49.269865 containerd[1580]: time="2026-04-21T10:17:49.268619054Z" level=info msg="StartContainer for \"d96b6047be15952952afdeab899af4d9ee157bdb0f5a1af59ec4cd7e8bf3c65e\"" Apr 21 10:17:49.341284 containerd[1580]: time="2026-04-21T10:17:49.341227248Z" level=info msg="StartContainer for \"d96b6047be15952952afdeab899af4d9ee157bdb0f5a1af59ec4cd7e8bf3c65e\" returns successfully" Apr 21 10:17:49.523535 containerd[1580]: time="2026-04-21T10:17:49.523111395Z" level=info msg="shim disconnected" id=d96b6047be15952952afdeab899af4d9ee157bdb0f5a1af59ec4cd7e8bf3c65e namespace=k8s.io Apr 21 10:17:49.523535 containerd[1580]: time="2026-04-21T10:17:49.523175265Z" level=warning msg="cleaning up after shim disconnected" id=d96b6047be15952952afdeab899af4d9ee157bdb0f5a1af59ec4cd7e8bf3c65e namespace=k8s.io Apr 21 10:17:49.523535 containerd[1580]: time="2026-04-21T10:17:49.523185365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:17:49.535839 containerd[1580]: time="2026-04-21T10:17:49.535805393Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:17:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:17:49.542642 containerd[1580]: time="2026-04-21T10:17:49.542455702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:17:50.209249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d96b6047be15952952afdeab899af4d9ee157bdb0f5a1af59ec4cd7e8bf3c65e-rootfs.mount: Deactivated successfully. 
Apr 21 10:17:50.443265 kubelet[2738]: E0421 10:17:50.443232 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjz5l" podUID="768b5922-7716-4a2f-ad9a-14196f3f0888" Apr 21 10:17:51.256761 containerd[1580]: time="2026-04-21T10:17:51.256697950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:51.257631 containerd[1580]: time="2026-04-21T10:17:51.257542222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:17:51.258149 containerd[1580]: time="2026-04-21T10:17:51.258104324Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:51.259844 containerd[1580]: time="2026-04-21T10:17:51.259823668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:51.260678 containerd[1580]: time="2026-04-21T10:17:51.260585880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.718103408s" Apr 21 10:17:51.260678 containerd[1580]: time="2026-04-21T10:17:51.260612980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:17:51.263749 containerd[1580]: time="2026-04-21T10:17:51.263710389Z" level=info msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:17:51.286259 containerd[1580]: time="2026-04-21T10:17:51.286209000Z" level=info msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d96c3bfe7f26dac097a76d41675faeb35d1310fa09c5d697a13e74f01d03fa7\"" Apr 21 10:17:51.287343 containerd[1580]: time="2026-04-21T10:17:51.286766981Z" level=info msg="StartContainer for \"1d96c3bfe7f26dac097a76d41675faeb35d1310fa09c5d697a13e74f01d03fa7\"" Apr 21 10:17:51.369140 containerd[1580]: time="2026-04-21T10:17:51.369025176Z" level=info msg="StartContainer for \"1d96c3bfe7f26dac097a76d41675faeb35d1310fa09c5d697a13e74f01d03fa7\" returns successfully" Apr 21 10:17:51.878843 containerd[1580]: time="2026-04-21T10:17:51.878799729Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:17:51.905334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d96c3bfe7f26dac097a76d41675faeb35d1310fa09c5d697a13e74f01d03fa7-rootfs.mount: Deactivated successfully. 
Apr 21 10:17:51.907363 containerd[1580]: time="2026-04-21T10:17:51.907300507Z" level=info msg="shim disconnected" id=1d96c3bfe7f26dac097a76d41675faeb35d1310fa09c5d697a13e74f01d03fa7 namespace=k8s.io Apr 21 10:17:51.907363 containerd[1580]: time="2026-04-21T10:17:51.907356137Z" level=warning msg="cleaning up after shim disconnected" id=1d96c3bfe7f26dac097a76d41675faeb35d1310fa09c5d697a13e74f01d03fa7 namespace=k8s.io Apr 21 10:17:51.907476 containerd[1580]: time="2026-04-21T10:17:51.907365177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:17:51.972717 kubelet[2738]: I0421 10:17:51.972692 2738 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:17:52.028661 kubelet[2738]: I0421 10:17:52.028208 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gmwl\" (UniqueName: \"kubernetes.io/projected/c7979636-3496-4985-b95e-0a670546c031-kube-api-access-4gmwl\") pod \"coredns-674b8bbfcf-jqrvf\" (UID: \"c7979636-3496-4985-b95e-0a670546c031\") " pod="kube-system/coredns-674b8bbfcf-jqrvf" Apr 21 10:17:52.028661 kubelet[2738]: I0421 10:17:52.028249 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c75f2b0-116b-43c2-af35-9fd375fcc220-config-volume\") pod \"coredns-674b8bbfcf-4f7qf\" (UID: \"4c75f2b0-116b-43c2-af35-9fd375fcc220\") " pod="kube-system/coredns-674b8bbfcf-4f7qf" Apr 21 10:17:52.028661 kubelet[2738]: I0421 10:17:52.028269 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf487f18-8688-4b2b-baea-a5fd2415ecd5-calico-apiserver-certs\") pod \"calico-apiserver-77558dd99f-hb6xz\" (UID: \"cf487f18-8688-4b2b-baea-a5fd2415ecd5\") " pod="calico-system/calico-apiserver-77558dd99f-hb6xz" Apr 21 10:17:52.028661 kubelet[2738]: I0421 
10:17:52.028288 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsqm7\" (UniqueName: \"kubernetes.io/projected/cf487f18-8688-4b2b-baea-a5fd2415ecd5-kube-api-access-xsqm7\") pod \"calico-apiserver-77558dd99f-hb6xz\" (UID: \"cf487f18-8688-4b2b-baea-a5fd2415ecd5\") " pod="calico-system/calico-apiserver-77558dd99f-hb6xz" Apr 21 10:17:52.028661 kubelet[2738]: I0421 10:17:52.028303 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7979636-3496-4985-b95e-0a670546c031-config-volume\") pod \"coredns-674b8bbfcf-jqrvf\" (UID: \"c7979636-3496-4985-b95e-0a670546c031\") " pod="kube-system/coredns-674b8bbfcf-jqrvf" Apr 21 10:17:52.029170 kubelet[2738]: I0421 10:17:52.028316 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzbp5\" (UniqueName: \"kubernetes.io/projected/4c75f2b0-116b-43c2-af35-9fd375fcc220-kube-api-access-fzbp5\") pod \"coredns-674b8bbfcf-4f7qf\" (UID: \"4c75f2b0-116b-43c2-af35-9fd375fcc220\") " pod="kube-system/coredns-674b8bbfcf-4f7qf" Apr 21 10:17:52.134039 kubelet[2738]: I0421 10:17:52.129206 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7s4q\" (UniqueName: \"kubernetes.io/projected/a1c554e9-d39c-4613-be59-44522a1d3236-kube-api-access-g7s4q\") pod \"calico-apiserver-77558dd99f-m7lvk\" (UID: \"a1c554e9-d39c-4613-be59-44522a1d3236\") " pod="calico-system/calico-apiserver-77558dd99f-m7lvk" Apr 21 10:17:52.134039 kubelet[2738]: I0421 10:17:52.129259 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6d602343-b06f-4a79-9735-f86cba637f01-goldmane-key-pair\") pod \"goldmane-5b85766d88-pskfx\" (UID: \"6d602343-b06f-4a79-9735-f86cba637f01\") " 
pod="calico-system/goldmane-5b85766d88-pskfx" Apr 21 10:17:52.134039 kubelet[2738]: I0421 10:17:52.129285 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d602343-b06f-4a79-9735-f86cba637f01-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-pskfx\" (UID: \"6d602343-b06f-4a79-9735-f86cba637f01\") " pod="calico-system/goldmane-5b85766d88-pskfx" Apr 21 10:17:52.134039 kubelet[2738]: I0421 10:17:52.129310 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d602343-b06f-4a79-9735-f86cba637f01-config\") pod \"goldmane-5b85766d88-pskfx\" (UID: \"6d602343-b06f-4a79-9735-f86cba637f01\") " pod="calico-system/goldmane-5b85766d88-pskfx" Apr 21 10:17:52.134039 kubelet[2738]: I0421 10:17:52.129335 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-nginx-config\") pod \"whisker-7f9f6dd55-69dxw\" (UID: \"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " pod="calico-system/whisker-7f9f6dd55-69dxw" Apr 21 10:17:52.138830 kubelet[2738]: I0421 10:17:52.129349 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-backend-key-pair\") pod \"whisker-7f9f6dd55-69dxw\" (UID: \"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " pod="calico-system/whisker-7f9f6dd55-69dxw" Apr 21 10:17:52.138830 kubelet[2738]: I0421 10:17:52.129362 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klttl\" (UniqueName: \"kubernetes.io/projected/e2800f64-6b87-48b6-a3a0-976e85837c6a-kube-api-access-klttl\") pod \"whisker-7f9f6dd55-69dxw\" (UID: 
\"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " pod="calico-system/whisker-7f9f6dd55-69dxw" Apr 21 10:17:52.138830 kubelet[2738]: I0421 10:17:52.129375 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxddv\" (UniqueName: \"kubernetes.io/projected/6d602343-b06f-4a79-9735-f86cba637f01-kube-api-access-qxddv\") pod \"goldmane-5b85766d88-pskfx\" (UID: \"6d602343-b06f-4a79-9735-f86cba637f01\") " pod="calico-system/goldmane-5b85766d88-pskfx" Apr 21 10:17:52.138830 kubelet[2738]: I0421 10:17:52.129391 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1c554e9-d39c-4613-be59-44522a1d3236-calico-apiserver-certs\") pod \"calico-apiserver-77558dd99f-m7lvk\" (UID: \"a1c554e9-d39c-4613-be59-44522a1d3236\") " pod="calico-system/calico-apiserver-77558dd99f-m7lvk" Apr 21 10:17:52.138830 kubelet[2738]: I0421 10:17:52.129405 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f95f96-3533-4369-810e-aac21a6a983c-tigera-ca-bundle\") pod \"calico-kube-controllers-68558db9f8-nj78r\" (UID: \"a1f95f96-3533-4369-810e-aac21a6a983c\") " pod="calico-system/calico-kube-controllers-68558db9f8-nj78r" Apr 21 10:17:52.139009 kubelet[2738]: I0421 10:17:52.129419 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6st\" (UniqueName: \"kubernetes.io/projected/a1f95f96-3533-4369-810e-aac21a6a983c-kube-api-access-px6st\") pod \"calico-kube-controllers-68558db9f8-nj78r\" (UID: \"a1f95f96-3533-4369-810e-aac21a6a983c\") " pod="calico-system/calico-kube-controllers-68558db9f8-nj78r" Apr 21 10:17:52.139009 kubelet[2738]: I0421 10:17:52.129450 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-ca-bundle\") pod \"whisker-7f9f6dd55-69dxw\" (UID: \"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " pod="calico-system/whisker-7f9f6dd55-69dxw" Apr 21 10:17:52.318711 containerd[1580]: time="2026-04-21T10:17:52.318161088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-hb6xz,Uid:cf487f18-8688-4b2b-baea-a5fd2415ecd5,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:52.332049 kubelet[2738]: E0421 10:17:52.332002 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:52.334957 kubelet[2738]: E0421 10:17:52.334918 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:52.335474 containerd[1580]: time="2026-04-21T10:17:52.335444834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f7qf,Uid:4c75f2b0-116b-43c2-af35-9fd375fcc220,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:52.335686 containerd[1580]: time="2026-04-21T10:17:52.335660044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqrvf,Uid:c7979636-3496-4985-b95e-0a670546c031,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:52.346451 containerd[1580]: time="2026-04-21T10:17:52.345818510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9f6dd55-69dxw,Uid:e2800f64-6b87-48b6-a3a0-976e85837c6a,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:52.346947 containerd[1580]: time="2026-04-21T10:17:52.346883643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pskfx,Uid:6d602343-b06f-4a79-9735-f86cba637f01,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:52.350166 containerd[1580]: 
time="2026-04-21T10:17:52.349939111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-m7lvk,Uid:a1c554e9-d39c-4613-be59-44522a1d3236,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:52.352539 containerd[1580]: time="2026-04-21T10:17:52.352483828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68558db9f8-nj78r,Uid:a1f95f96-3533-4369-810e-aac21a6a983c,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:52.451273 containerd[1580]: time="2026-04-21T10:17:52.451168135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjz5l,Uid:768b5922-7716-4a2f-ad9a-14196f3f0888,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:52.587222 containerd[1580]: time="2026-04-21T10:17:52.587056310Z" level=info msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:17:52.624592 containerd[1580]: time="2026-04-21T10:17:52.624408888Z" level=info msg="CreateContainer within sandbox \"39f0c08194540c5ca66dea1339af24bec571957d90b9a68b66b93c8619c63225\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eb9c7bbec6c73d52548a92a04d0447bb96225350e84abba1152e08029d33938e\"" Apr 21 10:17:52.626623 containerd[1580]: time="2026-04-21T10:17:52.626594153Z" level=info msg="StartContainer for \"eb9c7bbec6c73d52548a92a04d0447bb96225350e84abba1152e08029d33938e\"" Apr 21 10:17:52.637820 containerd[1580]: time="2026-04-21T10:17:52.636986180Z" level=error msg="Failed to destroy network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.637820 containerd[1580]: time="2026-04-21T10:17:52.637578592Z" level=error msg="encountered an error 
cleaning up failed sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.637820 containerd[1580]: time="2026-04-21T10:17:52.637647162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68558db9f8-nj78r,Uid:a1f95f96-3533-4369-810e-aac21a6a983c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.638055 kubelet[2738]: E0421 10:17:52.637829 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.638055 kubelet[2738]: E0421 10:17:52.637876 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68558db9f8-nj78r" Apr 21 10:17:52.638055 kubelet[2738]: E0421 10:17:52.637895 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68558db9f8-nj78r" Apr 21 10:17:52.638927 kubelet[2738]: E0421 10:17:52.637935 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68558db9f8-nj78r_calico-system(a1f95f96-3533-4369-810e-aac21a6a983c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-68558db9f8-nj78r_calico-system(a1f95f96-3533-4369-810e-aac21a6a983c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68558db9f8-nj78r" podUID="a1f95f96-3533-4369-810e-aac21a6a983c" Apr 21 10:17:52.661654 containerd[1580]: time="2026-04-21T10:17:52.661608224Z" level=error msg="Failed to destroy network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.662038 containerd[1580]: time="2026-04-21T10:17:52.662000035Z" level=error msg="encountered an error cleaning up failed sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 21 10:17:52.662101 containerd[1580]: time="2026-04-21T10:17:52.662068835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-hb6xz,Uid:cf487f18-8688-4b2b-baea-a5fd2415ecd5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.662378 kubelet[2738]: E0421 10:17:52.662345 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.662439 kubelet[2738]: E0421 10:17:52.662418 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77558dd99f-hb6xz" Apr 21 10:17:52.662470 kubelet[2738]: E0421 10:17:52.662445 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-77558dd99f-hb6xz" Apr 21 10:17:52.662536 kubelet[2738]: E0421 10:17:52.662509 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77558dd99f-hb6xz_calico-system(cf487f18-8688-4b2b-baea-a5fd2415ecd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77558dd99f-hb6xz_calico-system(cf487f18-8688-4b2b-baea-a5fd2415ecd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77558dd99f-hb6xz" podUID="cf487f18-8688-4b2b-baea-a5fd2415ecd5" Apr 21 10:17:52.674858 containerd[1580]: time="2026-04-21T10:17:52.674823779Z" level=error msg="Failed to destroy network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.676795 containerd[1580]: time="2026-04-21T10:17:52.675666511Z" level=error msg="encountered an error cleaning up failed sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.677268 containerd[1580]: time="2026-04-21T10:17:52.676915604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f7qf,Uid:4c75f2b0-116b-43c2-af35-9fd375fcc220,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.677478 kubelet[2738]: E0421 10:17:52.677302 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.677478 kubelet[2738]: E0421 10:17:52.677376 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4f7qf" Apr 21 10:17:52.677478 kubelet[2738]: E0421 10:17:52.677414 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4f7qf" Apr 21 10:17:52.677882 kubelet[2738]: E0421 10:17:52.677466 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4f7qf_kube-system(4c75f2b0-116b-43c2-af35-9fd375fcc220)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-4f7qf_kube-system(4c75f2b0-116b-43c2-af35-9fd375fcc220)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4f7qf" podUID="4c75f2b0-116b-43c2-af35-9fd375fcc220" Apr 21 10:17:52.691978 containerd[1580]: time="2026-04-21T10:17:52.691822793Z" level=error msg="Failed to destroy network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.692807 containerd[1580]: time="2026-04-21T10:17:52.692676295Z" level=error msg="encountered an error cleaning up failed sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.692807 containerd[1580]: time="2026-04-21T10:17:52.692724465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqrvf,Uid:c7979636-3496-4985-b95e-0a670546c031,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.692997 kubelet[2738]: E0421 10:17:52.692895 2738 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.692997 kubelet[2738]: E0421 10:17:52.692946 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqrvf" Apr 21 10:17:52.692997 kubelet[2738]: E0421 10:17:52.692966 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jqrvf" Apr 21 10:17:52.693095 kubelet[2738]: E0421 10:17:52.693008 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jqrvf_kube-system(c7979636-3496-4985-b95e-0a670546c031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jqrvf_kube-system(c7979636-3496-4985-b95e-0a670546c031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jqrvf" podUID="c7979636-3496-4985-b95e-0a670546c031" Apr 21 10:17:52.705622 containerd[1580]: time="2026-04-21T10:17:52.704906537Z" level=error msg="Failed to destroy network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.709113 containerd[1580]: time="2026-04-21T10:17:52.707792545Z" level=error msg="encountered an error cleaning up failed sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.709468 containerd[1580]: time="2026-04-21T10:17:52.709440369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f9f6dd55-69dxw,Uid:e2800f64-6b87-48b6-a3a0-976e85837c6a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.709912 kubelet[2738]: E0421 10:17:52.709879 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.709982 kubelet[2738]: E0421 10:17:52.709955 2738 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f9f6dd55-69dxw" Apr 21 10:17:52.710014 kubelet[2738]: E0421 10:17:52.709986 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f9f6dd55-69dxw" Apr 21 10:17:52.710187 kubelet[2738]: E0421 10:17:52.710086 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f9f6dd55-69dxw_calico-system(e2800f64-6b87-48b6-a3a0-976e85837c6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f9f6dd55-69dxw_calico-system(e2800f64-6b87-48b6-a3a0-976e85837c6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f9f6dd55-69dxw" podUID="e2800f64-6b87-48b6-a3a0-976e85837c6a" Apr 21 10:17:52.724444 containerd[1580]: time="2026-04-21T10:17:52.724409488Z" level=error msg="Failed to destroy network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.725024 containerd[1580]: time="2026-04-21T10:17:52.724999569Z" level=error msg="encountered an error cleaning up failed sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.725137 containerd[1580]: time="2026-04-21T10:17:52.725117040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pskfx,Uid:6d602343-b06f-4a79-9735-f86cba637f01,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.725386 kubelet[2738]: E0421 10:17:52.725349 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.725429 kubelet[2738]: E0421 10:17:52.725402 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pskfx" Apr 21 10:17:52.725429 kubelet[2738]: E0421 10:17:52.725420 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pskfx" Apr 21 10:17:52.725483 kubelet[2738]: E0421 10:17:52.725456 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-pskfx_calico-system(6d602343-b06f-4a79-9735-f86cba637f01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-pskfx_calico-system(6d602343-b06f-4a79-9735-f86cba637f01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-pskfx" podUID="6d602343-b06f-4a79-9735-f86cba637f01" Apr 21 10:17:52.727923 containerd[1580]: time="2026-04-21T10:17:52.727213235Z" level=error msg="Failed to destroy network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.728223 containerd[1580]: time="2026-04-21T10:17:52.727929857Z" level=error msg="Failed to destroy network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.730561 containerd[1580]: time="2026-04-21T10:17:52.728482519Z" level=error msg="encountered an error cleaning up failed sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.730713 containerd[1580]: time="2026-04-21T10:17:52.730689685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjz5l,Uid:768b5922-7716-4a2f-ad9a-14196f3f0888,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.730856 containerd[1580]: time="2026-04-21T10:17:52.728509749Z" level=error msg="encountered an error cleaning up failed sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.730856 containerd[1580]: time="2026-04-21T10:17:52.730830915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-m7lvk,Uid:a1c554e9-d39c-4613-be59-44522a1d3236,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.732276 kubelet[2738]: E0421 10:17:52.732252 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.732376 kubelet[2738]: E0421 10:17:52.732354 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77558dd99f-m7lvk" Apr 21 10:17:52.732456 kubelet[2738]: E0421 10:17:52.732440 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77558dd99f-m7lvk" Apr 21 10:17:52.732602 kubelet[2738]: E0421 10:17:52.732526 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77558dd99f-m7lvk_calico-system(a1c554e9-d39c-4613-be59-44522a1d3236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77558dd99f-m7lvk_calico-system(a1c554e9-d39c-4613-be59-44522a1d3236)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77558dd99f-m7lvk" podUID="a1c554e9-d39c-4613-be59-44522a1d3236" Apr 21 10:17:52.733174 kubelet[2738]: E0421 10:17:52.732775 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:52.733174 kubelet[2738]: E0421 10:17:52.732879 2738 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjz5l" Apr 21 10:17:52.736836 kubelet[2738]: E0421 10:17:52.736698 2738 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjz5l" Apr 21 10:17:52.736974 kubelet[2738]: E0421 10:17:52.736854 2738 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-zjz5l_calico-system(768b5922-7716-4a2f-ad9a-14196f3f0888)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjz5l_calico-system(768b5922-7716-4a2f-ad9a-14196f3f0888)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjz5l" podUID="768b5922-7716-4a2f-ad9a-14196f3f0888" Apr 21 10:17:52.783827 containerd[1580]: time="2026-04-21T10:17:52.783786803Z" level=info msg="StartContainer for \"eb9c7bbec6c73d52548a92a04d0447bb96225350e84abba1152e08029d33938e\" returns successfully" Apr 21 10:17:53.279083 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074-shm.mount: Deactivated successfully. Apr 21 10:17:53.279257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51-shm.mount: Deactivated successfully. 
Apr 21 10:17:53.561819 kubelet[2738]: I0421 10:17:53.561368 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:17:53.563744 containerd[1580]: time="2026-04-21T10:17:53.562782198Z" level=info msg="StopPodSandbox for \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\"" Apr 21 10:17:53.563744 containerd[1580]: time="2026-04-21T10:17:53.563043218Z" level=info msg="Ensure that sandbox 07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58 in task-service has been cleanup successfully" Apr 21 10:17:53.565096 kubelet[2738]: I0421 10:17:53.564178 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:17:53.566378 containerd[1580]: time="2026-04-21T10:17:53.565892886Z" level=info msg="StopPodSandbox for \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\"" Apr 21 10:17:53.566378 containerd[1580]: time="2026-04-21T10:17:53.566017856Z" level=info msg="Ensure that sandbox c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06 in task-service has been cleanup successfully" Apr 21 10:17:53.568597 kubelet[2738]: I0421 10:17:53.568543 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:17:53.569573 containerd[1580]: time="2026-04-21T10:17:53.569270974Z" level=info msg="StopPodSandbox for \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\"" Apr 21 10:17:53.570756 containerd[1580]: time="2026-04-21T10:17:53.570729588Z" level=info msg="Ensure that sandbox fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51 in task-service has been cleanup successfully" Apr 21 10:17:53.572183 kubelet[2738]: I0421 10:17:53.572161 2738 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:17:53.574646 containerd[1580]: time="2026-04-21T10:17:53.574626458Z" level=info msg="StopPodSandbox for \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\"" Apr 21 10:17:53.575775 containerd[1580]: time="2026-04-21T10:17:53.575620290Z" level=info msg="Ensure that sandbox f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34 in task-service has been cleanup successfully" Apr 21 10:17:53.578005 kubelet[2738]: I0421 10:17:53.577971 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:17:53.579904 containerd[1580]: time="2026-04-21T10:17:53.579868071Z" level=info msg="StopPodSandbox for \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\"" Apr 21 10:17:53.581046 containerd[1580]: time="2026-04-21T10:17:53.581020783Z" level=info msg="Ensure that sandbox f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd in task-service has been cleanup successfully" Apr 21 10:17:53.590445 kubelet[2738]: I0421 10:17:53.589811 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:17:53.591405 containerd[1580]: time="2026-04-21T10:17:53.591371979Z" level=info msg="StopPodSandbox for \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\"" Apr 21 10:17:53.591742 containerd[1580]: time="2026-04-21T10:17:53.591718700Z" level=info msg="Ensure that sandbox 1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d in task-service has been cleanup successfully" Apr 21 10:17:53.593360 kubelet[2738]: I0421 10:17:53.593343 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:17:53.595881 
containerd[1580]: time="2026-04-21T10:17:53.595849681Z" level=info msg="StopPodSandbox for \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\"" Apr 21 10:17:53.596008 containerd[1580]: time="2026-04-21T10:17:53.595987221Z" level=info msg="Ensure that sandbox e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074 in task-service has been cleanup successfully" Apr 21 10:17:53.605013 kubelet[2738]: I0421 10:17:53.604992 2738 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:17:53.606220 containerd[1580]: time="2026-04-21T10:17:53.606185136Z" level=info msg="StopPodSandbox for \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\"" Apr 21 10:17:53.609496 containerd[1580]: time="2026-04-21T10:17:53.609476084Z" level=info msg="Ensure that sandbox 50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a in task-service has been cleanup successfully" Apr 21 10:17:53.618964 kubelet[2738]: I0421 10:17:53.618920 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nqpcq" podStartSLOduration=3.344123302 podStartE2EDuration="11.618907138s" podCreationTimestamp="2026-04-21 10:17:42 +0000 UTC" firstStartedPulling="2026-04-21 10:17:42.986656177 +0000 UTC m=+17.651469213" lastFinishedPulling="2026-04-21 10:17:51.261440013 +0000 UTC m=+25.926253049" observedRunningTime="2026-04-21 10:17:53.618765918 +0000 UTC m=+28.283578954" watchObservedRunningTime="2026-04-21 10:17:53.618907138 +0000 UTC m=+28.283720174" Apr 21 10:17:53.692478 systemd[1]: run-containerd-runc-k8s.io-eb9c7bbec6c73d52548a92a04d0447bb96225350e84abba1152e08029d33938e-runc.3d5MCJ.mount: Deactivated successfully. 
Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.803 [INFO][3931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.804 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" iface="eth0" netns="/var/run/netns/cni-699ba689-9172-1c59-efb9-f16f7496178a" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.804 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" iface="eth0" netns="/var/run/netns/cni-699ba689-9172-1c59-efb9-f16f7496178a" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.806 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" iface="eth0" netns="/var/run/netns/cni-699ba689-9172-1c59-efb9-f16f7496178a" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.806 [INFO][3931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.807 [INFO][3931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.931 [INFO][3987] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.932 [INFO][3987] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.932 [INFO][3987] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.939 [WARNING][3987] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.939 [INFO][3987] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.940 [INFO][3987] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:53.957092 containerd[1580]: 2026-04-21 10:17:53.949 [INFO][3931] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:17:53.957738 containerd[1580]: time="2026-04-21T10:17:53.957712918Z" level=info msg="TearDown network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\" successfully" Apr 21 10:17:53.957817 containerd[1580]: time="2026-04-21T10:17:53.957802079Z" level=info msg="StopPodSandbox for \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\" returns successfully" Apr 21 10:17:53.960934 containerd[1580]: time="2026-04-21T10:17:53.960905116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pskfx,Uid:6d602343-b06f-4a79-9735-f86cba637f01,Namespace:calico-system,Attempt:1,}" Apr 21 10:17:53.962251 systemd[1]: run-netns-cni\x2d699ba689\x2d9172\x2d1c59\x2defb9\x2df16f7496178a.mount: Deactivated successfully. Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.819 [INFO][3877] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.820 [INFO][3877] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" iface="eth0" netns="/var/run/netns/cni-7ac42aad-7002-8a2d-ddcc-a58d60e0452c" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.821 [INFO][3877] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" iface="eth0" netns="/var/run/netns/cni-7ac42aad-7002-8a2d-ddcc-a58d60e0452c" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.825 [INFO][3877] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" iface="eth0" netns="/var/run/netns/cni-7ac42aad-7002-8a2d-ddcc-a58d60e0452c" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.825 [INFO][3877] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.829 [INFO][3877] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.936 [INFO][3991] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.937 [INFO][3991] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.941 [INFO][3991] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.948 [WARNING][3991] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.948 [INFO][3991] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.950 [INFO][3991] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:53.968232 containerd[1580]: 2026-04-21 10:17:53.957 [INFO][3877] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:17:53.973762 containerd[1580]: time="2026-04-21T10:17:53.973725259Z" level=info msg="TearDown network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\" successfully" Apr 21 10:17:53.973869 containerd[1580]: time="2026-04-21T10:17:53.973851139Z" level=info msg="StopPodSandbox for \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\" returns successfully" Apr 21 10:17:53.975097 containerd[1580]: time="2026-04-21T10:17:53.975074382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjz5l,Uid:768b5922-7716-4a2f-ad9a-14196f3f0888,Namespace:calico-system,Attempt:1,}" Apr 21 10:17:53.975376 systemd[1]: run-netns-cni\x2d7ac42aad\x2d7002\x2d8a2d\x2dddcc\x2da58d60e0452c.mount: Deactivated successfully. 
Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.782 [INFO][3878] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.782 [INFO][3878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" iface="eth0" netns="/var/run/netns/cni-b3368e4b-1e4f-9d3e-71ea-54533b3e8fc2" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.783 [INFO][3878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" iface="eth0" netns="/var/run/netns/cni-b3368e4b-1e4f-9d3e-71ea-54533b3e8fc2" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.784 [INFO][3878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" iface="eth0" netns="/var/run/netns/cni-b3368e4b-1e4f-9d3e-71ea-54533b3e8fc2" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.784 [INFO][3878] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.784 [INFO][3878] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.976 [INFO][3981] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.977 
[INFO][3981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.977 [INFO][3981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.985 [WARNING][3981] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.985 [INFO][3981] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:53.989 [INFO][3981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.020533 containerd[1580]: 2026-04-21 10:17:54.004 [INFO][3878] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:17:54.021668 containerd[1580]: time="2026-04-21T10:17:54.021625637Z" level=info msg="TearDown network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\" successfully" Apr 21 10:17:54.021771 containerd[1580]: time="2026-04-21T10:17:54.021756107Z" level=info msg="StopPodSandbox for \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\" returns successfully" Apr 21 10:17:54.022924 containerd[1580]: time="2026-04-21T10:17:54.022901710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-hb6xz,Uid:cf487f18-8688-4b2b-baea-a5fd2415ecd5,Namespace:calico-system,Attempt:1,}" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:53.831 [INFO][3933] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:53.831 [INFO][3933] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" iface="eth0" netns="/var/run/netns/cni-1993864c-2559-809e-9ce3-65a655fd5984" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:53.833 [INFO][3933] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" iface="eth0" netns="/var/run/netns/cni-1993864c-2559-809e-9ce3-65a655fd5984" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:53.842 [INFO][3933] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" iface="eth0" netns="/var/run/netns/cni-1993864c-2559-809e-9ce3-65a655fd5984" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:53.842 [INFO][3933] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:53.842 [INFO][3933] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:54.008 [INFO][3998] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:54.009 [INFO][3998] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:54.009 [INFO][3998] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:54.023 [WARNING][3998] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:54.023 [INFO][3998] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:54.024 [INFO][3998] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.039161 containerd[1580]: 2026-04-21 10:17:54.037 [INFO][3933] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:17:54.039834 containerd[1580]: time="2026-04-21T10:17:54.039617770Z" level=info msg="TearDown network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\" successfully" Apr 21 10:17:54.039834 containerd[1580]: time="2026-04-21T10:17:54.039642120Z" level=info msg="StopPodSandbox for \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\" returns successfully" Apr 21 10:17:54.039997 kubelet[2738]: E0421 10:17:54.039919 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:54.042191 containerd[1580]: time="2026-04-21T10:17:54.042066276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f7qf,Uid:4c75f2b0-116b-43c2-af35-9fd375fcc220,Namespace:kube-system,Attempt:1,}" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:53.905 [INFO][3952] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:53.909 [INFO][3952] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" iface="eth0" netns="/var/run/netns/cni-c50ed242-f39d-f11f-2420-f089f0c42cc7" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:53.909 [INFO][3952] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" iface="eth0" netns="/var/run/netns/cni-c50ed242-f39d-f11f-2420-f089f0c42cc7" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:53.917 [INFO][3952] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" iface="eth0" netns="/var/run/netns/cni-c50ed242-f39d-f11f-2420-f089f0c42cc7" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:53.917 [INFO][3952] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:53.917 [INFO][3952] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:54.030 [INFO][4024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:54.030 [INFO][4024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:54.030 [INFO][4024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:54.052 [WARNING][4024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:54.052 [INFO][4024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:54.053 [INFO][4024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.086695 containerd[1580]: 2026-04-21 10:17:54.066 [INFO][3952] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:17:54.089846 containerd[1580]: time="2026-04-21T10:17:54.089503950Z" level=info msg="TearDown network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\" successfully" Apr 21 10:17:54.089846 containerd[1580]: time="2026-04-21T10:17:54.089533910Z" level=info msg="StopPodSandbox for \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\" returns successfully" Apr 21 10:17:54.090222 containerd[1580]: time="2026-04-21T10:17:54.090202602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-m7lvk,Uid:a1c554e9-d39c-4613-be59-44522a1d3236,Namespace:calico-system,Attempt:1,}" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:53.882 [INFO][3900] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:53.888 [INFO][3900] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" iface="eth0" netns="/var/run/netns/cni-3f146d5b-8a3b-0f22-a6c0-f0364f0128ec" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:53.889 [INFO][3900] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" iface="eth0" netns="/var/run/netns/cni-3f146d5b-8a3b-0f22-a6c0-f0364f0128ec" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:53.890 [INFO][3900] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" iface="eth0" netns="/var/run/netns/cni-3f146d5b-8a3b-0f22-a6c0-f0364f0128ec" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:53.890 [INFO][3900] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:53.890 [INFO][3900] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:54.036 [INFO][4016] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:54.040 [INFO][4016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:54.057 [INFO][4016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:54.066 [WARNING][4016] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:54.066 [INFO][4016] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:54.069 [INFO][4016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.117822 containerd[1580]: 2026-04-21 10:17:54.090 [INFO][3900] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:17:54.121603 containerd[1580]: time="2026-04-21T10:17:54.117986299Z" level=info msg="TearDown network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\" successfully" Apr 21 10:17:54.121603 containerd[1580]: time="2026-04-21T10:17:54.118018409Z" level=info msg="StopPodSandbox for \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\" returns successfully" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:53.837 [INFO][3890] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:53.847 [INFO][3890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" iface="eth0" netns="/var/run/netns/cni-79519363-161e-f9d9-ce35-399a165fa259" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:53.847 [INFO][3890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" iface="eth0" netns="/var/run/netns/cni-79519363-161e-f9d9-ce35-399a165fa259" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:53.847 [INFO][3890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" iface="eth0" netns="/var/run/netns/cni-79519363-161e-f9d9-ce35-399a165fa259" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:53.847 [INFO][3890] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:53.847 [INFO][3890] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:54.047 [INFO][4001] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:54.047 [INFO][4001] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:54.073 [INFO][4001] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:54.084 [WARNING][4001] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:54.084 [INFO][4001] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:54.085 [INFO][4001] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.148810 containerd[1580]: 2026-04-21 10:17:54.118 [INFO][3890] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:17:54.149634 containerd[1580]: time="2026-04-21T10:17:54.149608335Z" level=info msg="TearDown network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\" successfully" Apr 21 10:17:54.149721 containerd[1580]: time="2026-04-21T10:17:54.149706596Z" level=info msg="StopPodSandbox for \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\" returns successfully" Apr 21 10:17:54.153673 kubelet[2738]: I0421 10:17:54.152587 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-nginx-config\") pod \"e2800f64-6b87-48b6-a3a0-976e85837c6a\" (UID: \"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " Apr 21 10:17:54.158038 kubelet[2738]: I0421 10:17:54.157641 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klttl\" (UniqueName: \"kubernetes.io/projected/e2800f64-6b87-48b6-a3a0-976e85837c6a-kube-api-access-klttl\") pod \"e2800f64-6b87-48b6-a3a0-976e85837c6a\" (UID: \"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " Apr 21 10:17:54.158038 kubelet[2738]: I0421 10:17:54.157685 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-ca-bundle\") pod \"e2800f64-6b87-48b6-a3a0-976e85837c6a\" (UID: \"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " Apr 21 10:17:54.158038 kubelet[2738]: I0421 10:17:54.157736 2738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-backend-key-pair\") pod \"e2800f64-6b87-48b6-a3a0-976e85837c6a\" (UID: \"e2800f64-6b87-48b6-a3a0-976e85837c6a\") " Apr 21 10:17:54.160563 kubelet[2738]: I0421 10:17:54.158214 2738 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e2800f64-6b87-48b6-a3a0-976e85837c6a" (UID: "e2800f64-6b87-48b6-a3a0-976e85837c6a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:53.909 [INFO][3921] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:53.912 [INFO][3921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" iface="eth0" netns="/var/run/netns/cni-43caa795-493e-01e6-34e8-c9882c3b997e" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:53.912 [INFO][3921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" iface="eth0" netns="/var/run/netns/cni-43caa795-493e-01e6-34e8-c9882c3b997e" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:53.913 [INFO][3921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" iface="eth0" netns="/var/run/netns/cni-43caa795-493e-01e6-34e8-c9882c3b997e" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:53.913 [INFO][3921] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:53.913 [INFO][3921] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:54.088 [INFO][4021] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:54.090 [INFO][4021] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:54.090 [INFO][4021] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:54.095 [WARNING][4021] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:54.095 [INFO][4021] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:54.099 [INFO][4021] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.160964 containerd[1580]: 2026-04-21 10:17:54.137 [INFO][3921] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:17:54.163192 containerd[1580]: time="2026-04-21T10:17:54.163168598Z" level=info msg="TearDown network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\" successfully" Apr 21 10:17:54.163352 containerd[1580]: time="2026-04-21T10:17:54.163335228Z" level=info msg="StopPodSandbox for \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\" returns successfully" Apr 21 10:17:54.166730 kubelet[2738]: E0421 10:17:54.165458 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:54.166730 kubelet[2738]: I0421 10:17:54.166458 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e2800f64-6b87-48b6-a3a0-976e85837c6a" (UID: "e2800f64-6b87-48b6-a3a0-976e85837c6a"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:17:54.173824 containerd[1580]: time="2026-04-21T10:17:54.173793974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68558db9f8-nj78r,Uid:a1f95f96-3533-4369-810e-aac21a6a983c,Namespace:calico-system,Attempt:1,}" Apr 21 10:17:54.174133 kubelet[2738]: I0421 10:17:54.174110 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2800f64-6b87-48b6-a3a0-976e85837c6a-kube-api-access-klttl" (OuterVolumeSpecName: "kube-api-access-klttl") pod "e2800f64-6b87-48b6-a3a0-976e85837c6a" (UID: "e2800f64-6b87-48b6-a3a0-976e85837c6a"). InnerVolumeSpecName "kube-api-access-klttl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:17:54.174373 containerd[1580]: time="2026-04-21T10:17:54.174353785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqrvf,Uid:c7979636-3496-4985-b95e-0a670546c031,Namespace:kube-system,Attempt:1,}" Apr 21 10:17:54.183569 kubelet[2738]: I0421 10:17:54.182398 2738 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e2800f64-6b87-48b6-a3a0-976e85837c6a" (UID: "e2800f64-6b87-48b6-a3a0-976e85837c6a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:17:54.262625 kubelet[2738]: I0421 10:17:54.258763 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-backend-key-pair\") on node \"172-236-109-217\" DevicePath \"\"" Apr 21 10:17:54.262625 kubelet[2738]: I0421 10:17:54.258794 2738 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-nginx-config\") on node \"172-236-109-217\" DevicePath \"\"" Apr 21 10:17:54.262625 kubelet[2738]: I0421 10:17:54.258804 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-klttl\" (UniqueName: \"kubernetes.io/projected/e2800f64-6b87-48b6-a3a0-976e85837c6a-kube-api-access-klttl\") on node \"172-236-109-217\" DevicePath \"\"" Apr 21 10:17:54.262625 kubelet[2738]: I0421 10:17:54.258814 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2800f64-6b87-48b6-a3a0-976e85837c6a-whisker-ca-bundle\") on node \"172-236-109-217\" DevicePath \"\"" Apr 21 10:17:54.286495 systemd[1]: run-netns-cni\x2d79519363\x2d161e\x2df9d9\x2dce35\x2d399a165fa259.mount: Deactivated successfully. Apr 21 10:17:54.286881 systemd[1]: run-netns-cni\x2dc50ed242\x2df39d\x2df11f\x2d2420\x2df089f0c42cc7.mount: Deactivated successfully. Apr 21 10:17:54.287209 systemd[1]: run-netns-cni\x2d3f146d5b\x2d8a3b\x2d0f22\x2da6c0\x2df0364f0128ec.mount: Deactivated successfully. Apr 21 10:17:54.287393 systemd[1]: run-netns-cni\x2d43caa795\x2d493e\x2d01e6\x2d34e8\x2dc9882c3b997e.mount: Deactivated successfully. Apr 21 10:17:54.287679 systemd[1]: run-netns-cni\x2d1993864c\x2d2559\x2d809e\x2d9ce3\x2d65a655fd5984.mount: Deactivated successfully. Apr 21 10:17:54.288250 systemd[1]: run-netns-cni\x2db3368e4b\x2d1e4f\x2d9d3e\x2d71ea\x2d54533b3e8fc2.mount: Deactivated successfully. 
Apr 21 10:17:54.288569 systemd[1]: var-lib-kubelet-pods-e2800f64\x2d6b87\x2d48b6\x2da3a0\x2d976e85837c6a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dklttl.mount: Deactivated successfully. Apr 21 10:17:54.288904 systemd[1]: var-lib-kubelet-pods-e2800f64\x2d6b87\x2d48b6\x2da3a0\x2d976e85837c6a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:17:54.720912 systemd-networkd[1241]: cali54e2a32b412: Link UP Apr 21 10:17:54.722579 systemd-networkd[1241]: cali54e2a32b412: Gained carrier Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.253 [ERROR][4048] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.388 [INFO][4048] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-csi--node--driver--zjz5l-eth0 csi-node-driver- calico-system 768b5922-7716-4a2f-ad9a-14196f3f0888 885 0 2026-04-21 10:17:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-109-217 csi-node-driver-zjz5l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali54e2a32b412 [] [] }} ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.388 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.579 [INFO][4234] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" HandleID="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.593 [INFO][4234] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" HandleID="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035ff20), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-217", "pod":"csi-node-driver-zjz5l", "timestamp":"2026-04-21 10:17:54.579006132 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001e6000)} Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.594 [INFO][4234] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.594 [INFO][4234] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.594 [INFO][4234] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.597 [INFO][4234] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.606 [INFO][4234] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.612 [INFO][4234] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.617 [INFO][4234] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.627 [INFO][4234] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.629 [INFO][4234] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.644 [INFO][4234] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61 Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.653 [INFO][4234] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.668 [INFO][4234] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.1/26] block=192.168.98.0/26 
handle="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.669 [INFO][4234] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.1/26] handle="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" host="172-236-109-217" Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.674 [INFO][4234] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.757742 containerd[1580]: 2026-04-21 10:17:54.677 [INFO][4234] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.1/26] IPv6=[] ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" HandleID="k8s-pod-network.848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:54.759794 containerd[1580]: 2026-04-21 10:17:54.700 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-csi--node--driver--zjz5l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"768b5922-7716-4a2f-ad9a-14196f3f0888", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"csi-node-driver-zjz5l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e2a32b412", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:54.759794 containerd[1580]: 2026-04-21 10:17:54.702 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.1/32] ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:54.759794 containerd[1580]: 2026-04-21 10:17:54.702 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54e2a32b412 ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:54.759794 containerd[1580]: 2026-04-21 10:17:54.724 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:54.759794 containerd[1580]: 2026-04-21 10:17:54.725 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-csi--node--driver--zjz5l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"768b5922-7716-4a2f-ad9a-14196f3f0888", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61", Pod:"csi-node-driver-zjz5l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e2a32b412", MAC:"1e:c5:13:b4:f7:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:54.759794 containerd[1580]: 2026-04-21 10:17:54.742 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61" 
Namespace="calico-system" Pod="csi-node-driver-zjz5l" WorkloadEndpoint="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:17:54.765087 kubelet[2738]: I0421 10:17:54.764419 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b2173ef-a51e-4466-a37c-1e0a3ba565ef-whisker-backend-key-pair\") pod \"whisker-75bdd4d7fb-5wkk7\" (UID: \"7b2173ef-a51e-4466-a37c-1e0a3ba565ef\") " pod="calico-system/whisker-75bdd4d7fb-5wkk7" Apr 21 10:17:54.765087 kubelet[2738]: I0421 10:17:54.764504 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ghtd\" (UniqueName: \"kubernetes.io/projected/7b2173ef-a51e-4466-a37c-1e0a3ba565ef-kube-api-access-4ghtd\") pod \"whisker-75bdd4d7fb-5wkk7\" (UID: \"7b2173ef-a51e-4466-a37c-1e0a3ba565ef\") " pod="calico-system/whisker-75bdd4d7fb-5wkk7" Apr 21 10:17:54.765087 kubelet[2738]: I0421 10:17:54.764602 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b2173ef-a51e-4466-a37c-1e0a3ba565ef-whisker-ca-bundle\") pod \"whisker-75bdd4d7fb-5wkk7\" (UID: \"7b2173ef-a51e-4466-a37c-1e0a3ba565ef\") " pod="calico-system/whisker-75bdd4d7fb-5wkk7" Apr 21 10:17:54.765087 kubelet[2738]: I0421 10:17:54.764623 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7b2173ef-a51e-4466-a37c-1e0a3ba565ef-nginx-config\") pod \"whisker-75bdd4d7fb-5wkk7\" (UID: \"7b2173ef-a51e-4466-a37c-1e0a3ba565ef\") " pod="calico-system/whisker-75bdd4d7fb-5wkk7" Apr 21 10:17:54.794910 systemd-networkd[1241]: cali1cf085eada3: Link UP Apr 21 10:17:54.795672 systemd-networkd[1241]: cali1cf085eada3: Gained carrier Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.165 [ERROR][4038] 
cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.231 [INFO][4038] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0 goldmane-5b85766d88- calico-system 6d602343-b06f-4a79-9735-f86cba637f01 884 0 2026-04-21 10:17:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-236-109-217 goldmane-5b85766d88-pskfx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1cf085eada3 [] [] }} ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.232 [INFO][4038] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.598 [INFO][4187] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" HandleID="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.629 [INFO][4187] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" HandleID="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060c0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-217", "pod":"goldmane-5b85766d88-pskfx", "timestamp":"2026-04-21 10:17:54.598413888 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000da000)} Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.629 [INFO][4187] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.669 [INFO][4187] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.670 [INFO][4187] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.699 [INFO][4187] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.714 [INFO][4187] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.727 [INFO][4187] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.730 [INFO][4187] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.736 [INFO][4187] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.737 [INFO][4187] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.742 [INFO][4187] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.746 [INFO][4187] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.751 [INFO][4187] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.2/26] block=192.168.98.0/26 
handle="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.751 [INFO][4187] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.2/26] handle="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" host="172-236-109-217" Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.752 [INFO][4187] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.843916 containerd[1580]: 2026-04-21 10:17:54.752 [INFO][4187] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.2/26] IPv6=[] ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" HandleID="k8s-pod-network.63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:54.844500 containerd[1580]: 2026-04-21 10:17:54.778 [INFO][4038] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"6d602343-b06f-4a79-9735-f86cba637f01", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"goldmane-5b85766d88-pskfx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cf085eada3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:54.844500 containerd[1580]: 2026-04-21 10:17:54.778 [INFO][4038] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.2/32] ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:54.844500 containerd[1580]: 2026-04-21 10:17:54.778 [INFO][4038] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cf085eada3 ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:54.844500 containerd[1580]: 2026-04-21 10:17:54.802 [INFO][4038] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:54.844500 containerd[1580]: 2026-04-21 10:17:54.804 [INFO][4038] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" 
Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"6d602343-b06f-4a79-9735-f86cba637f01", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb", Pod:"goldmane-5b85766d88-pskfx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cf085eada3", MAC:"b6:88:00:dc:28:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:54.844500 containerd[1580]: 2026-04-21 10:17:54.816 [INFO][4038] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb" Namespace="calico-system" Pod="goldmane-5b85766d88-pskfx" WorkloadEndpoint="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:17:54.896776 
containerd[1580]: time="2026-04-21T10:17:54.895495885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:54.896776 containerd[1580]: time="2026-04-21T10:17:54.896268577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:54.896776 containerd[1580]: time="2026-04-21T10:17:54.896300547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:54.896842 systemd-networkd[1241]: cali1fd748e199c: Link UP Apr 21 10:17:54.900466 containerd[1580]: time="2026-04-21T10:17:54.896420067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:54.900413 systemd-networkd[1241]: cali1fd748e199c: Gained carrier Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.299 [ERROR][4066] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.371 [INFO][4066] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0 coredns-674b8bbfcf- kube-system 4c75f2b0-116b-43c2-af35-9fd375fcc220 886 0 2026-04-21 10:17:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-109-217 coredns-674b8bbfcf-4f7qf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1fd748e199c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.371 [INFO][4066] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.631 [INFO][4229] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" HandleID="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.646 [INFO][4229] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" HandleID="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397290), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-109-217", "pod":"coredns-674b8bbfcf-4f7qf", "timestamp":"2026-04-21 10:17:54.631925459 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005b09a0)} Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.646 [INFO][4229] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.752 [INFO][4229] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.752 [INFO][4229] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.801 [INFO][4229] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.814 [INFO][4229] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.834 [INFO][4229] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.839 [INFO][4229] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.843 [INFO][4229] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.843 [INFO][4229] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.846 [INFO][4229] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.851 [INFO][4229] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 
10:17:54.859 [INFO][4229] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.3/26] block=192.168.98.0/26 handle="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.859 [INFO][4229] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.3/26] handle="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" host="172-236-109-217" Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.859 [INFO][4229] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:54.938975 containerd[1580]: 2026-04-21 10:17:54.859 [INFO][4229] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.3/26] IPv6=[] ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" HandleID="k8s-pod-network.ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.939603 containerd[1580]: 2026-04-21 10:17:54.888 [INFO][4066] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c75f2b0-116b-43c2-af35-9fd375fcc220", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"coredns-674b8bbfcf-4f7qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fd748e199c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:54.939603 containerd[1580]: 2026-04-21 10:17:54.888 [INFO][4066] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.3/32] ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.939603 containerd[1580]: 2026-04-21 10:17:54.888 [INFO][4066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fd748e199c ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.939603 containerd[1580]: 2026-04-21 10:17:54.898 [INFO][4066] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.939603 containerd[1580]: 2026-04-21 10:17:54.905 [INFO][4066] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c75f2b0-116b-43c2-af35-9fd375fcc220", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b", Pod:"coredns-674b8bbfcf-4f7qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fd748e199c", MAC:"da:8f:a7:ee:03:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:54.939603 containerd[1580]: 2026-04-21 10:17:54.925 [INFO][4066] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b" Namespace="kube-system" Pod="coredns-674b8bbfcf-4f7qf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:17:54.983711 systemd-networkd[1241]: cali0129b023a06: Link UP Apr 21 10:17:54.986200 containerd[1580]: time="2026-04-21T10:17:54.986044834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:54.986200 containerd[1580]: time="2026-04-21T10:17:54.986130514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:54.986200 containerd[1580]: time="2026-04-21T10:17:54.986141384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:54.987710 containerd[1580]: time="2026-04-21T10:17:54.987679527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:54.989403 systemd-networkd[1241]: cali0129b023a06: Gained carrier Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.179 [ERROR][4062] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.217 [INFO][4062] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0 calico-apiserver-77558dd99f- calico-system cf487f18-8688-4b2b-baea-a5fd2415ecd5 883 0 2026-04-21 10:17:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77558dd99f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-109-217 calico-apiserver-77558dd99f-hb6xz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0129b023a06 [] [] }} ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.218 [INFO][4062] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.660 [INFO][4171] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" 
HandleID="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.729 [INFO][4171] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" HandleID="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-217", "pod":"calico-apiserver-77558dd99f-hb6xz", "timestamp":"2026-04-21 10:17:54.660429948 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00044a6e0)} Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.729 [INFO][4171] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.860 [INFO][4171] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.861 [INFO][4171] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.899 [INFO][4171] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.929 [INFO][4171] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.939 [INFO][4171] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.942 [INFO][4171] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.944 [INFO][4171] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.944 [INFO][4171] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.947 [INFO][4171] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.958 [INFO][4171] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.970 [INFO][4171] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.4/26] block=192.168.98.0/26 
handle="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.970 [INFO][4171] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.4/26] handle="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" host="172-236-109-217" Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.970 [INFO][4171] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:55.019534 containerd[1580]: 2026-04-21 10:17:54.970 [INFO][4171] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.4/26] IPv6=[] ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" HandleID="k8s-pod-network.eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:55.020133 containerd[1580]: 2026-04-21 10:17:54.979 [INFO][4062] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"cf487f18-8688-4b2b-baea-a5fd2415ecd5", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"calico-apiserver-77558dd99f-hb6xz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0129b023a06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.020133 containerd[1580]: 2026-04-21 10:17:54.980 [INFO][4062] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.4/32] ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:55.020133 containerd[1580]: 2026-04-21 10:17:54.980 [INFO][4062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0129b023a06 ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:55.020133 containerd[1580]: 2026-04-21 10:17:54.990 [INFO][4062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:55.020133 containerd[1580]: 2026-04-21 10:17:54.992 [INFO][4062] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"cf487f18-8688-4b2b-baea-a5fd2415ecd5", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e", Pod:"calico-apiserver-77558dd99f-hb6xz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0129b023a06", MAC:"f6:fe:94:93:5e:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.020133 containerd[1580]: 2026-04-21 10:17:55.008 [INFO][4062] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-hb6xz" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:17:55.026312 containerd[1580]: time="2026-04-21T10:17:55.025771267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:55.026312 containerd[1580]: time="2026-04-21T10:17:55.025814957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:55.026312 containerd[1580]: time="2026-04-21T10:17:55.025825058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.026312 containerd[1580]: time="2026-04-21T10:17:55.025923728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.028251 containerd[1580]: time="2026-04-21T10:17:55.028198943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjz5l,Uid:768b5922-7716-4a2f-ad9a-14196f3f0888,Namespace:calico-system,Attempt:1,} returns sandbox id \"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61\"" Apr 21 10:17:55.036469 containerd[1580]: time="2026-04-21T10:17:55.036333512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:17:55.048698 containerd[1580]: time="2026-04-21T10:17:55.048583060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75bdd4d7fb-5wkk7,Uid:7b2173ef-a51e-4466-a37c-1e0a3ba565ef,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:55.076763 systemd-networkd[1241]: cali7cddeac89ed: Link UP Apr 21 10:17:55.077742 systemd-networkd[1241]: cali7cddeac89ed: Gained carrier Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.440 [ERROR][4159] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.485 [INFO][4159] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0 calico-apiserver-77558dd99f- calico-system a1c554e9-d39c-4613-be59-44522a1d3236 890 0 2026-04-21 10:17:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77558dd99f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-109-217 calico-apiserver-77558dd99f-m7lvk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali7cddeac89ed [] [] }} 
ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.485 [INFO][4159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.842 [INFO][4242] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" HandleID="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.882 [INFO][4242] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" HandleID="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eaed0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-217", "pod":"calico-apiserver-77558dd99f-m7lvk", "timestamp":"2026-04-21 10:17:54.842143657 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002ba580)} Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.882 [INFO][4242] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.974 [INFO][4242] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:54.974 [INFO][4242] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.001 [INFO][4242] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.013 [INFO][4242] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.041 [INFO][4242] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.043 [INFO][4242] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.045 [INFO][4242] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.045 [INFO][4242] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.049 [INFO][4242] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.052 [INFO][4242] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" host="172-236-109-217" Apr 21 10:17:55.126409 
containerd[1580]: 2026-04-21 10:17:55.059 [INFO][4242] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.5/26] block=192.168.98.0/26 handle="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.059 [INFO][4242] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.5/26] handle="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" host="172-236-109-217" Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.059 [INFO][4242] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:55.126409 containerd[1580]: 2026-04-21 10:17:55.059 [INFO][4242] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.5/26] IPv6=[] ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" HandleID="k8s-pod-network.703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:55.126971 containerd[1580]: 2026-04-21 10:17:55.066 [INFO][4159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"a1c554e9-d39c-4613-be59-44522a1d3236", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"calico-apiserver-77558dd99f-m7lvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cddeac89ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.126971 containerd[1580]: 2026-04-21 10:17:55.068 [INFO][4159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.5/32] ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:55.126971 containerd[1580]: 2026-04-21 10:17:55.069 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cddeac89ed ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:55.126971 containerd[1580]: 2026-04-21 10:17:55.081 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" 
WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:55.126971 containerd[1580]: 2026-04-21 10:17:55.089 [INFO][4159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"a1c554e9-d39c-4613-be59-44522a1d3236", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f", Pod:"calico-apiserver-77558dd99f-m7lvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cddeac89ed", MAC:"e6:99:c4:ad:44:a6", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.126971 containerd[1580]: 2026-04-21 10:17:55.107 [INFO][4159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f" Namespace="calico-system" Pod="calico-apiserver-77558dd99f-m7lvk" WorkloadEndpoint="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:17:55.144952 containerd[1580]: time="2026-04-21T10:17:55.144733394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pskfx,Uid:6d602343-b06f-4a79-9735-f86cba637f01,Namespace:calico-system,Attempt:1,} returns sandbox id \"63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb\"" Apr 21 10:17:55.145818 containerd[1580]: time="2026-04-21T10:17:55.143950623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:55.145818 containerd[1580]: time="2026-04-21T10:17:55.144048613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:55.145818 containerd[1580]: time="2026-04-21T10:17:55.144059663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.145818 containerd[1580]: time="2026-04-21T10:17:55.144165483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.179019 systemd-networkd[1241]: calia599fba41ae: Link UP Apr 21 10:17:55.179244 systemd-networkd[1241]: calia599fba41ae: Gained carrier Apr 21 10:17:55.228059 containerd[1580]: time="2026-04-21T10:17:55.227759257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4f7qf,Uid:4c75f2b0-116b-43c2-af35-9fd375fcc220,Namespace:kube-system,Attempt:1,} returns sandbox id \"ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b\"" Apr 21 10:17:55.230113 kubelet[2738]: E0421 10:17:55.230060 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:54.601 [ERROR][4216] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:54.648 [INFO][4216] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0 calico-kube-controllers-68558db9f8- calico-system a1f95f96-3533-4369-810e-aac21a6a983c 887 0 2026-04-21 10:17:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:68558db9f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-109-217 calico-kube-controllers-68558db9f8-nj78r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia599fba41ae [] [] }} ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" 
Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:54.649 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:54.919 [INFO][4270] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" HandleID="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:54.936 [INFO][4270] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" HandleID="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b91a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-217", "pod":"calico-kube-controllers-68558db9f8-nj78r", "timestamp":"2026-04-21 10:17:54.919140563 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186dc0)} Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:54.945 [INFO][4270] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.059 [INFO][4270] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.059 [INFO][4270] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.107 [INFO][4270] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.118 [INFO][4270] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.137 [INFO][4270] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.139 [INFO][4270] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.141 [INFO][4270] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.141 [INFO][4270] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.144 [INFO][4270] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.153 [INFO][4270] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 
10:17:55.165 [INFO][4270] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.6/26] block=192.168.98.0/26 handle="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.165 [INFO][4270] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.6/26] handle="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" host="172-236-109-217" Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.165 [INFO][4270] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:55.235815 containerd[1580]: 2026-04-21 10:17:55.165 [INFO][4270] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.6/26] IPv6=[] ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" HandleID="k8s-pod-network.c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:55.238055 containerd[1580]: 2026-04-21 10:17:55.175 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0", GenerateName:"calico-kube-controllers-68558db9f8-", Namespace:"calico-system", SelfLink:"", UID:"a1f95f96-3533-4369-810e-aac21a6a983c", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68558db9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"calico-kube-controllers-68558db9f8-nj78r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia599fba41ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.238055 containerd[1580]: 2026-04-21 10:17:55.175 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.6/32] ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:55.238055 containerd[1580]: 2026-04-21 10:17:55.175 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia599fba41ae ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:55.238055 containerd[1580]: 2026-04-21 10:17:55.180 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:55.238055 containerd[1580]: 2026-04-21 10:17:55.180 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0", GenerateName:"calico-kube-controllers-68558db9f8-", Namespace:"calico-system", SelfLink:"", UID:"a1f95f96-3533-4369-810e-aac21a6a983c", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68558db9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f", Pod:"calico-kube-controllers-68558db9f8-nj78r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia599fba41ae", MAC:"ba:33:ec:da:cd:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.238055 containerd[1580]: 2026-04-21 10:17:55.212 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f" Namespace="calico-system" Pod="calico-kube-controllers-68558db9f8-nj78r" WorkloadEndpoint="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:17:55.245485 containerd[1580]: time="2026-04-21T10:17:55.244795826Z" level=info msg="CreateContainer within sandbox \"ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:17:55.288605 containerd[1580]: time="2026-04-21T10:17:55.282848555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:55.288605 containerd[1580]: time="2026-04-21T10:17:55.282910496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:55.288605 containerd[1580]: time="2026-04-21T10:17:55.282923886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.288605 containerd[1580]: time="2026-04-21T10:17:55.283024326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.288694 systemd-networkd[1241]: cali24584b69f41: Link UP Apr 21 10:17:55.291560 systemd-networkd[1241]: cali24584b69f41: Gained carrier Apr 21 10:17:55.314582 containerd[1580]: time="2026-04-21T10:17:55.313600407Z" level=info msg="CreateContainer within sandbox \"ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18606a53d43b8eb87f82b894341f042b7c6037cc142aed1137582024203e82e9\"" Apr 21 10:17:55.334784 containerd[1580]: time="2026-04-21T10:17:55.334747076Z" level=info msg="StartContainer for \"18606a53d43b8eb87f82b894341f042b7c6037cc142aed1137582024203e82e9\"" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:54.650 [ERROR][4193] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:54.751 [INFO][4193] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0 coredns-674b8bbfcf- kube-system c7979636-3496-4985-b95e-0a670546c031 891 0 2026-04-21 10:17:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-109-217 coredns-674b8bbfcf-jqrvf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali24584b69f41 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:54.752 [INFO][4193] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:54.938 [INFO][4287] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" HandleID="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:54.961 [INFO][4287] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" HandleID="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336b50), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-109-217", "pod":"coredns-674b8bbfcf-jqrvf", "timestamp":"2026-04-21 10:17:54.938532369 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000385b80)} Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:54.961 [INFO][4287] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.169 [INFO][4287] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.169 [INFO][4287] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.201 [INFO][4287] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.216 [INFO][4287] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.241 [INFO][4287] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.244 [INFO][4287] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.248 [INFO][4287] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.248 [INFO][4287] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.250 [INFO][4287] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.254 [INFO][4287] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.264 [INFO][4287] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.7/26] block=192.168.98.0/26 
handle="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.264 [INFO][4287] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.7/26] handle="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" host="172-236-109-217" Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.264 [INFO][4287] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:55.367753 containerd[1580]: 2026-04-21 10:17:55.264 [INFO][4287] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.7/26] IPv6=[] ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" HandleID="k8s-pod-network.eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:55.368357 containerd[1580]: 2026-04-21 10:17:55.277 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c7979636-3496-4985-b95e-0a670546c031", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"coredns-674b8bbfcf-jqrvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24584b69f41", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.368357 containerd[1580]: 2026-04-21 10:17:55.277 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.7/32] ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:55.368357 containerd[1580]: 2026-04-21 10:17:55.277 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24584b69f41 ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:55.368357 containerd[1580]: 2026-04-21 10:17:55.293 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:55.368357 containerd[1580]: 2026-04-21 10:17:55.309 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c7979636-3496-4985-b95e-0a670546c031", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba", Pod:"coredns-674b8bbfcf-jqrvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24584b69f41", MAC:"8a:14:ce:e1:9f:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.368357 containerd[1580]: 2026-04-21 10:17:55.337 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba" Namespace="kube-system" Pod="coredns-674b8bbfcf-jqrvf" WorkloadEndpoint="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:17:55.421634 containerd[1580]: time="2026-04-21T10:17:55.421468948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-hb6xz,Uid:cf487f18-8688-4b2b-baea-a5fd2415ecd5,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e\"" Apr 21 10:17:55.428023 containerd[1580]: time="2026-04-21T10:17:55.427188581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:55.428023 containerd[1580]: time="2026-04-21T10:17:55.427294061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:55.428023 containerd[1580]: time="2026-04-21T10:17:55.427315501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.428023 containerd[1580]: time="2026-04-21T10:17:55.427438722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.452271 kubelet[2738]: I0421 10:17:55.451972 2738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2800f64-6b87-48b6-a3a0-976e85837c6a" path="/var/lib/kubelet/pods/e2800f64-6b87-48b6-a3a0-976e85837c6a/volumes" Apr 21 10:17:55.502762 containerd[1580]: time="2026-04-21T10:17:55.497893225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:55.502762 containerd[1580]: time="2026-04-21T10:17:55.497953706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:55.502762 containerd[1580]: time="2026-04-21T10:17:55.497967416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.502762 containerd[1580]: time="2026-04-21T10:17:55.498501987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.625567 containerd[1580]: time="2026-04-21T10:17:55.622428055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77558dd99f-m7lvk,Uid:a1c554e9-d39c-4613-be59-44522a1d3236,Namespace:calico-system,Attempt:1,} returns sandbox id \"703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f\"" Apr 21 10:17:55.702619 systemd-networkd[1241]: cali734803a3992: Link UP Apr 21 10:17:55.709049 containerd[1580]: time="2026-04-21T10:17:55.708981547Z" level=info msg="StartContainer for \"18606a53d43b8eb87f82b894341f042b7c6037cc142aed1137582024203e82e9\" returns successfully" Apr 21 10:17:55.711692 systemd-networkd[1241]: cali734803a3992: Gained carrier Apr 21 10:17:55.712314 containerd[1580]: time="2026-04-21T10:17:55.712283254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68558db9f8-nj78r,Uid:a1f95f96-3533-4369-810e-aac21a6a983c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f\"" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.198 [ERROR][4449] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.231 [INFO][4449] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0 whisker-75bdd4d7fb- calico-system 7b2173ef-a51e-4466-a37c-1e0a3ba565ef 913 0 2026-04-21 10:17:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75bdd4d7fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-236-109-217 whisker-75bdd4d7fb-5wkk7 eth0 whisker [] [] 
[kns.calico-system ksa.calico-system.whisker] cali734803a3992 [] [] }} ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.231 [INFO][4449] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.506 [INFO][4523] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" HandleID="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Workload="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.518 [INFO][4523] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" HandleID="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Workload="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000424330), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-109-217", "pod":"whisker-75bdd4d7fb-5wkk7", "timestamp":"2026-04-21 10:17:55.506365375 +0000 UTC"}, Hostname:"172-236-109-217", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112840)} Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.518 [INFO][4523] ipam/ipam_plugin.go 438: About to 
acquire host-wide IPAM lock. Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.519 [INFO][4523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.519 [INFO][4523] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-109-217' Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.526 [INFO][4523] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.551 [INFO][4523] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.585 [INFO][4523] ipam/ipam.go 526: Trying affinity for 192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.604 [INFO][4523] ipam/ipam.go 160: Attempting to load block cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.618 [INFO][4523] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.98.0/26 host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.619 [INFO][4523] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.98.0/26 handle="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.642 [INFO][4523] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54 Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.646 [INFO][4523] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.98.0/26 handle="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" host="172-236-109-217" Apr 21 10:17:55.772432 
containerd[1580]: 2026-04-21 10:17:55.657 [INFO][4523] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.98.8/26] block=192.168.98.0/26 handle="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.657 [INFO][4523] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.98.8/26] handle="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" host="172-236-109-217" Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.657 [INFO][4523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:55.772432 containerd[1580]: 2026-04-21 10:17:55.657 [INFO][4523] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.98.8/26] IPv6=[] ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" HandleID="k8s-pod-network.1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Workload="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" Apr 21 10:17:55.776134 containerd[1580]: 2026-04-21 10:17:55.673 [INFO][4449] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0", GenerateName:"whisker-75bdd4d7fb-", Namespace:"calico-system", SelfLink:"", UID:"7b2173ef-a51e-4466-a37c-1e0a3ba565ef", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75bdd4d7fb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"", Pod:"whisker-75bdd4d7fb-5wkk7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali734803a3992", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.776134 containerd[1580]: 2026-04-21 10:17:55.673 [INFO][4449] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.98.8/32] ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" Apr 21 10:17:55.776134 containerd[1580]: 2026-04-21 10:17:55.673 [INFO][4449] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali734803a3992 ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" Apr 21 10:17:55.776134 containerd[1580]: 2026-04-21 10:17:55.722 [INFO][4449] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" Apr 21 10:17:55.776134 containerd[1580]: 2026-04-21 10:17:55.728 [INFO][4449] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0", GenerateName:"whisker-75bdd4d7fb-", Namespace:"calico-system", SelfLink:"", UID:"7b2173ef-a51e-4466-a37c-1e0a3ba565ef", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75bdd4d7fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54", Pod:"whisker-75bdd4d7fb-5wkk7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.98.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali734803a3992", MAC:"86:8f:91:3e:0c:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:55.776134 containerd[1580]: 2026-04-21 10:17:55.753 [INFO][4449] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54" Namespace="calico-system" Pod="whisker-75bdd4d7fb-5wkk7" 
WorkloadEndpoint="172--236--109--217-k8s-whisker--75bdd4d7fb--5wkk7-eth0" Apr 21 10:17:55.841659 containerd[1580]: time="2026-04-21T10:17:55.841535774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:55.852694 containerd[1580]: time="2026-04-21T10:17:55.843646329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:55.852694 containerd[1580]: time="2026-04-21T10:17:55.843671189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.852694 containerd[1580]: time="2026-04-21T10:17:55.847617798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:55.870766 containerd[1580]: time="2026-04-21T10:17:55.869661240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jqrvf,Uid:c7979636-3496-4985-b95e-0a670546c031,Namespace:kube-system,Attempt:1,} returns sandbox id \"eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba\"" Apr 21 10:17:55.871701 kubelet[2738]: E0421 10:17:55.870942 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:55.876835 containerd[1580]: time="2026-04-21T10:17:55.876176585Z" level=info msg="CreateContainer within sandbox \"eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:17:55.898277 containerd[1580]: time="2026-04-21T10:17:55.898237506Z" level=info msg="CreateContainer within sandbox \"eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container 
id \"3cf94e13d6cf86840306c3a0eb8ead6ddf30a09a8763de7d9f103a06261e26ab\"" Apr 21 10:17:55.901797 containerd[1580]: time="2026-04-21T10:17:55.901765534Z" level=info msg="StartContainer for \"3cf94e13d6cf86840306c3a0eb8ead6ddf30a09a8763de7d9f103a06261e26ab\"" Apr 21 10:17:55.986272 containerd[1580]: time="2026-04-21T10:17:55.986241431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75bdd4d7fb-5wkk7,Uid:7b2173ef-a51e-4466-a37c-1e0a3ba565ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54\"" Apr 21 10:17:55.993738 containerd[1580]: time="2026-04-21T10:17:55.993615128Z" level=info msg="StartContainer for \"3cf94e13d6cf86840306c3a0eb8ead6ddf30a09a8763de7d9f103a06261e26ab\" returns successfully" Apr 21 10:17:56.053851 systemd-networkd[1241]: cali1cf085eada3: Gained IPv6LL Apr 21 10:17:56.117821 systemd-networkd[1241]: cali1fd748e199c: Gained IPv6LL Apr 21 10:17:56.143982 containerd[1580]: time="2026-04-21T10:17:56.143915605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:56.144992 containerd[1580]: time="2026-04-21T10:17:56.144947317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:17:56.145662 containerd[1580]: time="2026-04-21T10:17:56.145613929Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:56.147651 containerd[1580]: time="2026-04-21T10:17:56.147612384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:56.148671 containerd[1580]: time="2026-04-21T10:17:56.148410506Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.111787833s" Apr 21 10:17:56.148671 containerd[1580]: time="2026-04-21T10:17:56.148438516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:17:56.149923 containerd[1580]: time="2026-04-21T10:17:56.149903659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:17:56.152399 containerd[1580]: time="2026-04-21T10:17:56.152370605Z" level=info msg="CreateContainer within sandbox \"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:17:56.161808 containerd[1580]: time="2026-04-21T10:17:56.161781405Z" level=info msg="CreateContainer within sandbox \"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b6679416bbaaa5a2bf215dfbf912f941ef32bbc6628a9ccd2d5d31fa88abdb51\"" Apr 21 10:17:56.162448 containerd[1580]: time="2026-04-21T10:17:56.162254127Z" level=info msg="StartContainer for \"b6679416bbaaa5a2bf215dfbf912f941ef32bbc6628a9ccd2d5d31fa88abdb51\"" Apr 21 10:17:56.182640 systemd-networkd[1241]: cali0129b023a06: Gained IPv6LL Apr 21 10:17:56.229628 containerd[1580]: time="2026-04-21T10:17:56.229501447Z" level=info msg="StartContainer for \"b6679416bbaaa5a2bf215dfbf912f941ef32bbc6628a9ccd2d5d31fa88abdb51\" returns successfully" Apr 21 10:17:56.438157 systemd-networkd[1241]: calia599fba41ae: Gained IPv6LL Apr 21 10:17:56.629878 systemd-networkd[1241]: cali54e2a32b412: Gained IPv6LL Apr 21 10:17:56.669572 kubelet[2738]: E0421 
10:17:56.669164 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:56.674395 kubelet[2738]: E0421 10:17:56.673362 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:56.687173 kubelet[2738]: I0421 10:17:56.687124 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jqrvf" podStartSLOduration=24.687110623 podStartE2EDuration="24.687110623s" podCreationTimestamp="2026-04-21 10:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:56.680681499 +0000 UTC m=+31.345494565" watchObservedRunningTime="2026-04-21 10:17:56.687110623 +0000 UTC m=+31.351923659" Apr 21 10:17:56.698255 kubelet[2738]: I0421 10:17:56.698124 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4f7qf" podStartSLOduration=24.698113417 podStartE2EDuration="24.698113417s" podCreationTimestamp="2026-04-21 10:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:56.697615107 +0000 UTC m=+31.362428163" watchObservedRunningTime="2026-04-21 10:17:56.698113417 +0000 UTC m=+31.362926453" Apr 21 10:17:56.885999 systemd-networkd[1241]: cali24584b69f41: Gained IPv6LL Apr 21 10:17:57.077796 systemd-networkd[1241]: cali7cddeac89ed: Gained IPv6LL Apr 21 10:17:57.205670 systemd-networkd[1241]: cali734803a3992: Gained IPv6LL Apr 21 10:17:57.397818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753578991.mount: Deactivated successfully. 
Apr 21 10:17:57.682873 kubelet[2738]: E0421 10:17:57.682426 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:57.684952 kubelet[2738]: E0421 10:17:57.683896 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:57.775437 containerd[1580]: time="2026-04-21T10:17:57.775393782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:57.776403 containerd[1580]: time="2026-04-21T10:17:57.776320395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:17:57.777276 containerd[1580]: time="2026-04-21T10:17:57.776996826Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:57.780325 containerd[1580]: time="2026-04-21T10:17:57.780289813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:57.781237 containerd[1580]: time="2026-04-21T10:17:57.781213585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.630608245s" Apr 21 10:17:57.781313 containerd[1580]: time="2026-04-21T10:17:57.781297915Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:17:57.787079 containerd[1580]: time="2026-04-21T10:17:57.787053328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:17:57.789565 containerd[1580]: time="2026-04-21T10:17:57.789084142Z" level=info msg="CreateContainer within sandbox \"63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:17:57.809453 containerd[1580]: time="2026-04-21T10:17:57.809419506Z" level=info msg="CreateContainer within sandbox \"63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a29bcba08a48d979cdd0971c14a75d27fccff9ef5026ceebcfb03f15cfbfff0a\"" Apr 21 10:17:57.810103 containerd[1580]: time="2026-04-21T10:17:57.810055298Z" level=info msg="StartContainer for \"a29bcba08a48d979cdd0971c14a75d27fccff9ef5026ceebcfb03f15cfbfff0a\"" Apr 21 10:17:57.909885 containerd[1580]: time="2026-04-21T10:17:57.909848144Z" level=info msg="StartContainer for \"a29bcba08a48d979cdd0971c14a75d27fccff9ef5026ceebcfb03f15cfbfff0a\" returns successfully" Apr 21 10:17:58.709273 kubelet[2738]: E0421 10:17:58.709239 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:17:59.366408 containerd[1580]: time="2026-04-21T10:17:59.366358613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:59.367392 containerd[1580]: time="2026-04-21T10:17:59.367213095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:17:59.368033 
containerd[1580]: time="2026-04-21T10:17:59.367990346Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:59.369848 containerd[1580]: time="2026-04-21T10:17:59.369810159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:59.370757 containerd[1580]: time="2026-04-21T10:17:59.370627081Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.583092331s" Apr 21 10:17:59.370757 containerd[1580]: time="2026-04-21T10:17:59.370655071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:17:59.372667 containerd[1580]: time="2026-04-21T10:17:59.372614615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:17:59.376126 containerd[1580]: time="2026-04-21T10:17:59.376100093Z" level=info msg="CreateContainer within sandbox \"eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:17:59.395521 containerd[1580]: time="2026-04-21T10:17:59.395449842Z" level=info msg="CreateContainer within sandbox \"eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35954a541abbddea41de88e5feebe0f5d8a5c1c630d9fbbc041a095817137fee\"" Apr 21 
10:17:59.396133 containerd[1580]: time="2026-04-21T10:17:59.396021823Z" level=info msg="StartContainer for \"35954a541abbddea41de88e5feebe0f5d8a5c1c630d9fbbc041a095817137fee\"" Apr 21 10:17:59.441452 systemd[1]: run-containerd-runc-k8s.io-35954a541abbddea41de88e5feebe0f5d8a5c1c630d9fbbc041a095817137fee-runc.OvPb1X.mount: Deactivated successfully. Apr 21 10:17:59.487441 containerd[1580]: time="2026-04-21T10:17:59.487402688Z" level=info msg="StartContainer for \"35954a541abbddea41de88e5feebe0f5d8a5c1c630d9fbbc041a095817137fee\" returns successfully" Apr 21 10:17:59.561902 containerd[1580]: time="2026-04-21T10:17:59.561843258Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:59.565211 containerd[1580]: time="2026-04-21T10:17:59.565165015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:17:59.567883 containerd[1580]: time="2026-04-21T10:17:59.567822940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 195.179685ms" Apr 21 10:17:59.567970 containerd[1580]: time="2026-04-21T10:17:59.567888281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:17:59.571962 containerd[1580]: time="2026-04-21T10:17:59.571919699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:17:59.577890 containerd[1580]: time="2026-04-21T10:17:59.577853800Z" level=info msg="CreateContainer within sandbox 
\"703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:17:59.594360 containerd[1580]: time="2026-04-21T10:17:59.594282534Z" level=info msg="CreateContainer within sandbox \"703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7e9351815f44fdf52dd19c4ebfed9a96dfe29510e8395962decc927eb5ae5f16\"" Apr 21 10:17:59.596367 containerd[1580]: time="2026-04-21T10:17:59.596264628Z" level=info msg="StartContainer for \"7e9351815f44fdf52dd19c4ebfed9a96dfe29510e8395962decc927eb5ae5f16\"" Apr 21 10:17:59.694399 containerd[1580]: time="2026-04-21T10:17:59.694166876Z" level=info msg="StartContainer for \"7e9351815f44fdf52dd19c4ebfed9a96dfe29510e8395962decc927eb5ae5f16\" returns successfully" Apr 21 10:17:59.719859 kubelet[2738]: I0421 10:17:59.719827 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:17:59.724567 kubelet[2738]: I0421 10:17:59.724508 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-77558dd99f-m7lvk" podStartSLOduration=14.793747635999999 podStartE2EDuration="18.724494967s" podCreationTimestamp="2026-04-21 10:17:41 +0000 UTC" firstStartedPulling="2026-04-21 10:17:55.639501614 +0000 UTC m=+30.304314650" lastFinishedPulling="2026-04-21 10:17:59.570248935 +0000 UTC m=+34.235061981" observedRunningTime="2026-04-21 10:17:59.723701116 +0000 UTC m=+34.388514152" watchObservedRunningTime="2026-04-21 10:17:59.724494967 +0000 UTC m=+34.389308003" Apr 21 10:17:59.724796 kubelet[2738]: I0421 10:17:59.724768 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-pskfx" podStartSLOduration=15.090928214 podStartE2EDuration="17.724761318s" podCreationTimestamp="2026-04-21 10:17:42 +0000 UTC" firstStartedPulling="2026-04-21 10:17:55.152468332 +0000 
UTC m=+29.817281368" lastFinishedPulling="2026-04-21 10:17:57.786301426 +0000 UTC m=+32.451114472" observedRunningTime="2026-04-21 10:17:58.728738324 +0000 UTC m=+33.393551370" watchObservedRunningTime="2026-04-21 10:17:59.724761318 +0000 UTC m=+34.389574354" Apr 21 10:18:00.397699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039465257.mount: Deactivated successfully. Apr 21 10:18:00.725645 kubelet[2738]: I0421 10:18:00.719370 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:18:00.771471 kubelet[2738]: I0421 10:18:00.771415 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-77558dd99f-hb6xz" podStartSLOduration=15.824339948 podStartE2EDuration="19.771398797s" podCreationTimestamp="2026-04-21 10:17:41 +0000 UTC" firstStartedPulling="2026-04-21 10:17:55.424879705 +0000 UTC m=+30.089692741" lastFinishedPulling="2026-04-21 10:17:59.371938554 +0000 UTC m=+34.036751590" observedRunningTime="2026-04-21 10:17:59.739187567 +0000 UTC m=+34.404000603" watchObservedRunningTime="2026-04-21 10:18:00.771398797 +0000 UTC m=+35.436211833" Apr 21 10:18:01.557113 containerd[1580]: time="2026-04-21T10:18:01.557037843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:01.559586 containerd[1580]: time="2026-04-21T10:18:01.558828846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:18:01.559586 containerd[1580]: time="2026-04-21T10:18:01.559444848Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:01.563575 containerd[1580]: time="2026-04-21T10:18:01.562676784Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:01.563575 containerd[1580]: time="2026-04-21T10:18:01.563347975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 1.991391226s" Apr 21 10:18:01.563575 containerd[1580]: time="2026-04-21T10:18:01.563371455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:18:01.565130 containerd[1580]: time="2026-04-21T10:18:01.565103298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:18:01.588572 containerd[1580]: time="2026-04-21T10:18:01.588398613Z" level=info msg="CreateContainer within sandbox \"c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:18:01.601818 containerd[1580]: time="2026-04-21T10:18:01.599992564Z" level=info msg="CreateContainer within sandbox \"c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a33de61d9c1eb1f42f5589700fec279af867ffd3c1b43735f83c56caa4156e69\"" Apr 21 10:18:01.603029 containerd[1580]: time="2026-04-21T10:18:01.602485789Z" level=info msg="StartContainer for \"a33de61d9c1eb1f42f5589700fec279af867ffd3c1b43735f83c56caa4156e69\"" Apr 21 10:18:01.711572 containerd[1580]: time="2026-04-21T10:18:01.710791535Z" level=info msg="StartContainer for 
\"a33de61d9c1eb1f42f5589700fec279af867ffd3c1b43735f83c56caa4156e69\" returns successfully" Apr 21 10:18:01.773451 kubelet[2738]: I0421 10:18:01.773375 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-68558db9f8-nj78r" podStartSLOduration=13.935966354 podStartE2EDuration="19.773333113s" podCreationTimestamp="2026-04-21 10:17:42 +0000 UTC" firstStartedPulling="2026-04-21 10:17:55.726936297 +0000 UTC m=+30.391749333" lastFinishedPulling="2026-04-21 10:18:01.564303046 +0000 UTC m=+36.229116092" observedRunningTime="2026-04-21 10:18:01.772956683 +0000 UTC m=+36.437769739" watchObservedRunningTime="2026-04-21 10:18:01.773333113 +0000 UTC m=+36.438146169" Apr 21 10:18:01.883520 systemd-journald[1160]: Under memory pressure, flushing caches. Apr 21 10:18:01.878762 systemd-resolved[1472]: Under memory pressure, flushing caches. Apr 21 10:18:01.878816 systemd-resolved[1472]: Flushed all caches. Apr 21 10:18:02.366589 containerd[1580]: time="2026-04-21T10:18:02.366427809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:02.372313 containerd[1580]: time="2026-04-21T10:18:02.371474958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:18:02.375991 containerd[1580]: time="2026-04-21T10:18:02.375373716Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:02.379995 containerd[1580]: time="2026-04-21T10:18:02.379258063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:02.379995 containerd[1580]: time="2026-04-21T10:18:02.379730034Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 814.597826ms" Apr 21 10:18:02.379995 containerd[1580]: time="2026-04-21T10:18:02.379757735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:18:02.381989 containerd[1580]: time="2026-04-21T10:18:02.381954988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:18:02.383851 containerd[1580]: time="2026-04-21T10:18:02.383822332Z" level=info msg="CreateContainer within sandbox \"1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:18:02.400128 containerd[1580]: time="2026-04-21T10:18:02.400106462Z" level=info msg="CreateContainer within sandbox \"1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9eb1b3facb77095a1b62795a6facb1b768b8a95945da15123cf033d15bce1004\"" Apr 21 10:18:02.401261 containerd[1580]: time="2026-04-21T10:18:02.401233984Z" level=info msg="StartContainer for \"9eb1b3facb77095a1b62795a6facb1b768b8a95945da15123cf033d15bce1004\"" Apr 21 10:18:02.474861 containerd[1580]: time="2026-04-21T10:18:02.474780480Z" level=info msg="StartContainer for \"9eb1b3facb77095a1b62795a6facb1b768b8a95945da15123cf033d15bce1004\" returns successfully" Apr 21 10:18:03.504004 containerd[1580]: time="2026-04-21T10:18:03.503940980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 
21 10:18:03.504999 containerd[1580]: time="2026-04-21T10:18:03.504862462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:18:03.505517 containerd[1580]: time="2026-04-21T10:18:03.505491072Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:03.508018 containerd[1580]: time="2026-04-21T10:18:03.507987617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:03.508896 containerd[1580]: time="2026-04-21T10:18:03.508788029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.126803131s" Apr 21 10:18:03.508896 containerd[1580]: time="2026-04-21T10:18:03.508817949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:18:03.510284 containerd[1580]: time="2026-04-21T10:18:03.510235881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:18:03.514279 containerd[1580]: time="2026-04-21T10:18:03.514179758Z" level=info msg="CreateContainer within sandbox \"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:18:03.529057 containerd[1580]: 
time="2026-04-21T10:18:03.529032145Z" level=info msg="CreateContainer within sandbox \"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0395fd1c3327c7b15dd61a21a2f445c6255e3d59dca25b6b33b895f4e8c300b2\"" Apr 21 10:18:03.532284 containerd[1580]: time="2026-04-21T10:18:03.529735726Z" level=info msg="StartContainer for \"0395fd1c3327c7b15dd61a21a2f445c6255e3d59dca25b6b33b895f4e8c300b2\"" Apr 21 10:18:03.530312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3576407178.mount: Deactivated successfully. Apr 21 10:18:03.614829 containerd[1580]: time="2026-04-21T10:18:03.614737678Z" level=info msg="StartContainer for \"0395fd1c3327c7b15dd61a21a2f445c6255e3d59dca25b6b33b895f4e8c300b2\" returns successfully" Apr 21 10:18:04.547520 kubelet[2738]: I0421 10:18:04.547396 2738 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:18:04.549187 kubelet[2738]: I0421 10:18:04.548594 2738 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:18:04.574688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287347032.mount: Deactivated successfully. 
Apr 21 10:18:04.585014 containerd[1580]: time="2026-04-21T10:18:04.584966376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:04.586225 containerd[1580]: time="2026-04-21T10:18:04.586056389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:18:04.586225 containerd[1580]: time="2026-04-21T10:18:04.586193169Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:04.588610 containerd[1580]: time="2026-04-21T10:18:04.588209912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:04.589716 containerd[1580]: time="2026-04-21T10:18:04.589083694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.078822383s" Apr 21 10:18:04.589716 containerd[1580]: time="2026-04-21T10:18:04.589110324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:18:04.592709 containerd[1580]: time="2026-04-21T10:18:04.592685880Z" level=info msg="CreateContainer within sandbox \"1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:18:04.606356 
containerd[1580]: time="2026-04-21T10:18:04.606332374Z" level=info msg="CreateContainer within sandbox \"1394e4328d0ec176c15244efcc7d1b03349df0642b9fde74924b0a9b9f349e54\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"97f87699a293ec1c43108142758688304167f6a0b3f23e2c1171d9eea5bbbbb9\"" Apr 21 10:18:04.606947 containerd[1580]: time="2026-04-21T10:18:04.606690004Z" level=info msg="StartContainer for \"97f87699a293ec1c43108142758688304167f6a0b3f23e2c1171d9eea5bbbbb9\"" Apr 21 10:18:04.663505 systemd[1]: run-containerd-runc-k8s.io-97f87699a293ec1c43108142758688304167f6a0b3f23e2c1171d9eea5bbbbb9-runc.QHzf7j.mount: Deactivated successfully. Apr 21 10:18:04.736006 containerd[1580]: time="2026-04-21T10:18:04.735972199Z" level=info msg="StartContainer for \"97f87699a293ec1c43108142758688304167f6a0b3f23e2c1171d9eea5bbbbb9\" returns successfully" Apr 21 10:18:04.756147 kubelet[2738]: I0421 10:18:04.756019 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-75bdd4d7fb-5wkk7" podStartSLOduration=2.153473852 podStartE2EDuration="10.756004284s" podCreationTimestamp="2026-04-21 10:17:54 +0000 UTC" firstStartedPulling="2026-04-21 10:17:55.987727284 +0000 UTC m=+30.652540320" lastFinishedPulling="2026-04-21 10:18:04.590257716 +0000 UTC m=+39.255070752" observedRunningTime="2026-04-21 10:18:04.75329399 +0000 UTC m=+39.418107026" watchObservedRunningTime="2026-04-21 10:18:04.756004284 +0000 UTC m=+39.420817320" Apr 21 10:18:04.756314 kubelet[2738]: I0421 10:18:04.756207 2738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zjz5l" podStartSLOduration=14.282120156 podStartE2EDuration="22.756203235s" podCreationTimestamp="2026-04-21 10:17:42 +0000 UTC" firstStartedPulling="2026-04-21 10:17:55.035854692 +0000 UTC m=+29.700667728" lastFinishedPulling="2026-04-21 10:18:03.509937771 +0000 UTC m=+38.174750807" observedRunningTime="2026-04-21 10:18:03.754633848 
+0000 UTC m=+38.419446904" watchObservedRunningTime="2026-04-21 10:18:04.756203235 +0000 UTC m=+39.421016281" Apr 21 10:18:08.410209 kubelet[2738]: I0421 10:18:08.410090 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:18:08.410720 kubelet[2738]: E0421 10:18:08.410453 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:18:08.752862 kubelet[2738]: E0421 10:18:08.752750 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:18:09.261577 kernel: calico-node[5468]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:18:09.634648 kubelet[2738]: I0421 10:18:09.632649 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:18:09.859457 systemd-networkd[1241]: vxlan.calico: Link UP Apr 21 10:18:09.859476 systemd-networkd[1241]: vxlan.calico: Gained carrier Apr 21 10:18:11.478604 systemd-networkd[1241]: vxlan.calico: Gained IPv6LL Apr 21 10:18:22.322623 kubelet[2738]: I0421 10:18:22.322500 2738 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:18:25.439813 containerd[1580]: time="2026-04-21T10:18:25.439772940Z" level=info msg="StopPodSandbox for \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\"" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.476 [WARNING][5691] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"a1c554e9-d39c-4613-be59-44522a1d3236", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f", Pod:"calico-apiserver-77558dd99f-m7lvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cddeac89ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.476 [INFO][5691] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.476 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" iface="eth0" netns="" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.476 [INFO][5691] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.476 [INFO][5691] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.500 [INFO][5700] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.500 [INFO][5700] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.501 [INFO][5700] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.507 [WARNING][5700] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.507 [INFO][5700] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.508 [INFO][5700] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:25.513146 containerd[1580]: 2026-04-21 10:18:25.510 [INFO][5691] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.513757 containerd[1580]: time="2026-04-21T10:18:25.513183497Z" level=info msg="TearDown network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\" successfully" Apr 21 10:18:25.513757 containerd[1580]: time="2026-04-21T10:18:25.513207087Z" level=info msg="StopPodSandbox for \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\" returns successfully" Apr 21 10:18:25.513878 containerd[1580]: time="2026-04-21T10:18:25.513857878Z" level=info msg="RemovePodSandbox for \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\"" Apr 21 10:18:25.513905 containerd[1580]: time="2026-04-21T10:18:25.513885738Z" level=info msg="Forcibly stopping sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\"" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.560 [WARNING][5714] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"a1c554e9-d39c-4613-be59-44522a1d3236", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"703fc66d40d38ee628639a04a4d3e69d0dea88a6a43e15ec8bde3dbed88b863f", Pod:"calico-apiserver-77558dd99f-m7lvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali7cddeac89ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.561 [INFO][5714] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.561 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" iface="eth0" netns="" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.561 [INFO][5714] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.561 [INFO][5714] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.583 [INFO][5721] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.583 [INFO][5721] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.583 [INFO][5721] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.588 [WARNING][5721] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.588 [INFO][5721] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" HandleID="k8s-pod-network.50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--m7lvk-eth0" Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.589 [INFO][5721] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:25.594667 containerd[1580]: 2026-04-21 10:18:25.592 [INFO][5714] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a" Apr 21 10:18:25.595138 containerd[1580]: time="2026-04-21T10:18:25.594698434Z" level=info msg="TearDown network for sandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\" successfully" Apr 21 10:18:25.598833 containerd[1580]: time="2026-04-21T10:18:25.598802080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:25.598939 containerd[1580]: time="2026-04-21T10:18:25.598868040Z" level=info msg="RemovePodSandbox \"50689c2d39228bcc2daa0270f5f71cf82d42ce34f4923fddc0aef4f87c31739a\" returns successfully" Apr 21 10:18:25.599345 containerd[1580]: time="2026-04-21T10:18:25.599319890Z" level=info msg="StopPodSandbox for \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\"" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.631 [WARNING][5736] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"6d602343-b06f-4a79-9735-f86cba637f01", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb", Pod:"goldmane-5b85766d88-pskfx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1cf085eada3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.631 [INFO][5736] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.631 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" iface="eth0" netns="" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.631 [INFO][5736] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.631 [INFO][5736] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.651 [INFO][5743] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.651 [INFO][5743] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.651 [INFO][5743] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.656 [WARNING][5743] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.656 [INFO][5743] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.658 [INFO][5743] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:25.662387 containerd[1580]: 2026-04-21 10:18:25.660 [INFO][5736] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.662387 containerd[1580]: time="2026-04-21T10:18:25.662254265Z" level=info msg="TearDown network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\" successfully" Apr 21 10:18:25.662387 containerd[1580]: time="2026-04-21T10:18:25.662287435Z" level=info msg="StopPodSandbox for \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\" returns successfully" Apr 21 10:18:25.663361 containerd[1580]: time="2026-04-21T10:18:25.662748855Z" level=info msg="RemovePodSandbox for \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\"" Apr 21 10:18:25.663361 containerd[1580]: time="2026-04-21T10:18:25.662778176Z" level=info msg="Forcibly stopping sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\"" Apr 21 10:18:25.699821 systemd[1]: run-containerd-runc-k8s.io-eb9c7bbec6c73d52548a92a04d0447bb96225350e84abba1152e08029d33938e-runc.f8OCs1.mount: Deactivated successfully. 
Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.707 [WARNING][5763] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"6d602343-b06f-4a79-9735-f86cba637f01", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"63dd939d72c7c8dbb2baa5649a7e2ce8db3cf876776e8f0a333a33f2b62becdb", Pod:"goldmane-5b85766d88-pskfx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.98.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cf085eada3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.708 [INFO][5763] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 
10:18:25.708 [INFO][5763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" iface="eth0" netns="" Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.708 [INFO][5763] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.708 [INFO][5763] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.730 [INFO][5780] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.730 [INFO][5780] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.730 [INFO][5780] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.735 [WARNING][5780] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.735 [INFO][5780] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" HandleID="k8s-pod-network.1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Workload="172--236--109--217-k8s-goldmane--5b85766d88--pskfx-eth0" Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.737 [INFO][5780] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:25.741571 containerd[1580]: 2026-04-21 10:18:25.739 [INFO][5763] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d" Apr 21 10:18:25.741571 containerd[1580]: time="2026-04-21T10:18:25.741330019Z" level=info msg="TearDown network for sandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\" successfully" Apr 21 10:18:25.745398 containerd[1580]: time="2026-04-21T10:18:25.745117324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:25.745398 containerd[1580]: time="2026-04-21T10:18:25.745161584Z" level=info msg="RemovePodSandbox \"1e0ff3283d8cd1f1ab907d5069c12585d66dc555644a73d28127d464fff53e1d\" returns successfully" Apr 21 10:18:25.746020 containerd[1580]: time="2026-04-21T10:18:25.745784654Z" level=info msg="StopPodSandbox for \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\"" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.792 [WARNING][5800] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" WorkloadEndpoint="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.793 [INFO][5800] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.793 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" iface="eth0" netns="" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.793 [INFO][5800] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.793 [INFO][5800] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.819 [INFO][5807] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.819 [INFO][5807] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.819 [INFO][5807] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.825 [WARNING][5807] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.825 [INFO][5807] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.826 [INFO][5807] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:25.830838 containerd[1580]: 2026-04-21 10:18:25.828 [INFO][5800] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.831234 containerd[1580]: time="2026-04-21T10:18:25.831191856Z" level=info msg="TearDown network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\" successfully" Apr 21 10:18:25.831269 containerd[1580]: time="2026-04-21T10:18:25.831235196Z" level=info msg="StopPodSandbox for \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\" returns successfully" Apr 21 10:18:25.831757 containerd[1580]: time="2026-04-21T10:18:25.831738846Z" level=info msg="RemovePodSandbox for \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\"" Apr 21 10:18:25.831794 containerd[1580]: time="2026-04-21T10:18:25.831762566Z" level=info msg="Forcibly stopping sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\"" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.864 [WARNING][5821] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" WorkloadEndpoint="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.864 [INFO][5821] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.864 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" iface="eth0" netns="" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.864 [INFO][5821] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.864 [INFO][5821] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.885 [INFO][5829] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.885 [INFO][5829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.885 [INFO][5829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.891 [WARNING][5829] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.891 [INFO][5829] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" HandleID="k8s-pod-network.f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Workload="172--236--109--217-k8s-whisker--7f9f6dd55--69dxw-eth0" Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.892 [INFO][5829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:25.897823 containerd[1580]: 2026-04-21 10:18:25.895 [INFO][5821] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34" Apr 21 10:18:25.898280 containerd[1580]: time="2026-04-21T10:18:25.897865596Z" level=info msg="TearDown network for sandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\" successfully" Apr 21 10:18:25.901198 containerd[1580]: time="2026-04-21T10:18:25.901158569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:25.901248 containerd[1580]: time="2026-04-21T10:18:25.901208159Z" level=info msg="RemovePodSandbox \"f9c9ff7d381663617b1ce1b1693017c5fa27016c48ee886f14001b08efba7c34\" returns successfully" Apr 21 10:18:25.901663 containerd[1580]: time="2026-04-21T10:18:25.901643859Z" level=info msg="StopPodSandbox for \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\"" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.934 [WARNING][5844] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c75f2b0-116b-43c2-af35-9fd375fcc220", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b", Pod:"coredns-674b8bbfcf-4f7qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fd748e199c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.934 [INFO][5844] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.934 [INFO][5844] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" iface="eth0" netns="" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.934 [INFO][5844] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.934 [INFO][5844] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.957 [INFO][5852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.957 [INFO][5852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.957 [INFO][5852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.962 [WARNING][5852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.962 [INFO][5852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.963 [INFO][5852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:25.968915 containerd[1580]: 2026-04-21 10:18:25.966 [INFO][5844] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:25.968915 containerd[1580]: time="2026-04-21T10:18:25.968580099Z" level=info msg="TearDown network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\" successfully" Apr 21 10:18:25.968915 containerd[1580]: time="2026-04-21T10:18:25.968603219Z" level=info msg="StopPodSandbox for \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\" returns successfully" Apr 21 10:18:25.970870 containerd[1580]: time="2026-04-21T10:18:25.970536591Z" level=info msg="RemovePodSandbox for \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\"" Apr 21 10:18:25.970870 containerd[1580]: time="2026-04-21T10:18:25.970582282Z" level=info msg="Forcibly stopping sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\"" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.004 [WARNING][5866] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4c75f2b0-116b-43c2-af35-9fd375fcc220", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"ab3657cc93e5852bdfec399a4ac834f8d612d718b6e43be1569b46e7fb90a79b", Pod:"coredns-674b8bbfcf-4f7qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fd748e199c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.004 
[INFO][5866] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.004 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" iface="eth0" netns="" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.004 [INFO][5866] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.004 [INFO][5866] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.041 [INFO][5873] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.041 [INFO][5873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.041 [INFO][5873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.048 [WARNING][5873] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.048 [INFO][5873] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" HandleID="k8s-pod-network.e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--4f7qf-eth0" Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.049 [INFO][5873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.057659 containerd[1580]: 2026-04-21 10:18:26.052 [INFO][5866] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074" Apr 21 10:18:26.058436 containerd[1580]: time="2026-04-21T10:18:26.057831255Z" level=info msg="TearDown network for sandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\" successfully" Apr 21 10:18:26.062143 containerd[1580]: time="2026-04-21T10:18:26.062108800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:26.062221 containerd[1580]: time="2026-04-21T10:18:26.062164550Z" level=info msg="RemovePodSandbox \"e6ec98758b5cf73757e8aca16671802433e7f79d074aaff47aa7ea747d364074\" returns successfully" Apr 21 10:18:26.062614 containerd[1580]: time="2026-04-21T10:18:26.062592690Z" level=info msg="StopPodSandbox for \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\"" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.103 [WARNING][5888] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c7979636-3496-4985-b95e-0a670546c031", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba", Pod:"coredns-674b8bbfcf-jqrvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24584b69f41", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.104 [INFO][5888] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.104 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" iface="eth0" netns="" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.104 [INFO][5888] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.104 [INFO][5888] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.132 [INFO][5895] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.132 [INFO][5895] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.132 [INFO][5895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.139 [WARNING][5895] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.139 [INFO][5895] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.142 [INFO][5895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.147191 containerd[1580]: 2026-04-21 10:18:26.144 [INFO][5888] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.147191 containerd[1580]: time="2026-04-21T10:18:26.147071491Z" level=info msg="TearDown network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\" successfully" Apr 21 10:18:26.147191 containerd[1580]: time="2026-04-21T10:18:26.147094191Z" level=info msg="StopPodSandbox for \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\" returns successfully" Apr 21 10:18:26.147686 containerd[1580]: time="2026-04-21T10:18:26.147653911Z" level=info msg="RemovePodSandbox for \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\"" Apr 21 10:18:26.147686 containerd[1580]: time="2026-04-21T10:18:26.147691521Z" level=info msg="Forcibly stopping sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\"" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.179 [WARNING][5909] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c7979636-3496-4985-b95e-0a670546c031", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"eadd11c28793dafbb08b6db335fca6b709775869c83f60356e7f6739e9996aba", Pod:"coredns-674b8bbfcf-jqrvf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.98.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali24584b69f41", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.179 
[INFO][5909] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.179 [INFO][5909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" iface="eth0" netns="" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.179 [INFO][5909] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.179 [INFO][5909] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.199 [INFO][5916] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.200 [INFO][5916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.200 [INFO][5916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.205 [WARNING][5916] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.205 [INFO][5916] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" HandleID="k8s-pod-network.f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Workload="172--236--109--217-k8s-coredns--674b8bbfcf--jqrvf-eth0" Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.207 [INFO][5916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.212008 containerd[1580]: 2026-04-21 10:18:26.209 [INFO][5909] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd" Apr 21 10:18:26.212387 containerd[1580]: time="2026-04-21T10:18:26.212068727Z" level=info msg="TearDown network for sandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\" successfully" Apr 21 10:18:26.226164 containerd[1580]: time="2026-04-21T10:18:26.225671073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:26.226164 containerd[1580]: time="2026-04-21T10:18:26.225729883Z" level=info msg="RemovePodSandbox \"f39f97d56bc88be5f37b2613257f13e0cb30e3ccea32fe7cc35d205deec545bd\" returns successfully" Apr 21 10:18:26.226404 containerd[1580]: time="2026-04-21T10:18:26.226378584Z" level=info msg="StopPodSandbox for \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\"" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.261 [WARNING][5931] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0", GenerateName:"calico-kube-controllers-68558db9f8-", Namespace:"calico-system", SelfLink:"", UID:"a1f95f96-3533-4369-810e-aac21a6a983c", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68558db9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f", Pod:"calico-kube-controllers-68558db9f8-nj78r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia599fba41ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.262 [INFO][5931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.262 [INFO][5931] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" iface="eth0" netns="" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.262 [INFO][5931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.262 [INFO][5931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.288 [INFO][5939] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.288 [INFO][5939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.288 [INFO][5939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.294 [WARNING][5939] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.294 [INFO][5939] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.296 [INFO][5939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.302422 containerd[1580]: 2026-04-21 10:18:26.299 [INFO][5931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.302422 containerd[1580]: time="2026-04-21T10:18:26.302227353Z" level=info msg="TearDown network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\" successfully" Apr 21 10:18:26.302422 containerd[1580]: time="2026-04-21T10:18:26.302262433Z" level=info msg="StopPodSandbox for \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\" returns successfully" Apr 21 10:18:26.303855 containerd[1580]: time="2026-04-21T10:18:26.303431175Z" level=info msg="RemovePodSandbox for \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\"" Apr 21 10:18:26.303855 containerd[1580]: time="2026-04-21T10:18:26.303472475Z" level=info msg="Forcibly stopping sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\"" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.339 [WARNING][5953] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0", GenerateName:"calico-kube-controllers-68558db9f8-", Namespace:"calico-system", SelfLink:"", UID:"a1f95f96-3533-4369-810e-aac21a6a983c", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68558db9f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"c3d529791df7001f575390fa83ddd9c97272ff226b90fe30b8d8ab9fdfd1136f", Pod:"calico-kube-controllers-68558db9f8-nj78r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.98.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia599fba41ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.340 [INFO][5953] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.340 [INFO][5953] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" iface="eth0" netns="" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.340 [INFO][5953] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.340 [INFO][5953] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.363 [INFO][5960] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.364 [INFO][5960] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.364 [INFO][5960] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.369 [WARNING][5960] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.370 [INFO][5960] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" HandleID="k8s-pod-network.c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Workload="172--236--109--217-k8s-calico--kube--controllers--68558db9f8--nj78r-eth0" Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.371 [INFO][5960] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.376337 containerd[1580]: 2026-04-21 10:18:26.374 [INFO][5953] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06" Apr 21 10:18:26.376882 containerd[1580]: time="2026-04-21T10:18:26.376379061Z" level=info msg="TearDown network for sandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\" successfully" Apr 21 10:18:26.379687 containerd[1580]: time="2026-04-21T10:18:26.379601124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:26.379687 containerd[1580]: time="2026-04-21T10:18:26.379676774Z" level=info msg="RemovePodSandbox \"c077f138df1f36a37ea4536eb49ff225cfb45fb64ad5bb14c77d35eec4023c06\" returns successfully" Apr 21 10:18:26.380348 containerd[1580]: time="2026-04-21T10:18:26.380088215Z" level=info msg="StopPodSandbox for \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\"" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.410 [WARNING][5974] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"cf487f18-8688-4b2b-baea-a5fd2415ecd5", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e", Pod:"calico-apiserver-77558dd99f-hb6xz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0129b023a06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.411 [INFO][5974] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.411 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" iface="eth0" netns="" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.411 [INFO][5974] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.411 [INFO][5974] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.429 [INFO][5981] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.430 [INFO][5981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.430 [INFO][5981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.435 [WARNING][5981] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.435 [INFO][5981] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.437 [INFO][5981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.442376 containerd[1580]: 2026-04-21 10:18:26.439 [INFO][5974] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.443815 containerd[1580]: time="2026-04-21T10:18:26.442410748Z" level=info msg="TearDown network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\" successfully" Apr 21 10:18:26.443815 containerd[1580]: time="2026-04-21T10:18:26.442434248Z" level=info msg="StopPodSandbox for \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\" returns successfully" Apr 21 10:18:26.443815 containerd[1580]: time="2026-04-21T10:18:26.442930948Z" level=info msg="RemovePodSandbox for \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\"" Apr 21 10:18:26.443815 containerd[1580]: time="2026-04-21T10:18:26.442955499Z" level=info msg="Forcibly stopping sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\"" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.481 [WARNING][5995] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0", GenerateName:"calico-apiserver-77558dd99f-", Namespace:"calico-system", SelfLink:"", UID:"cf487f18-8688-4b2b-baea-a5fd2415ecd5", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77558dd99f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"eb4020fa36ee9980766cd211418a351eff1eeb368718ccfbd36ce903805eb12e", Pod:"calico-apiserver-77558dd99f-hb6xz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.98.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0129b023a06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.481 [INFO][5995] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.481 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" iface="eth0" netns="" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.482 [INFO][5995] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.482 [INFO][5995] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.505 [INFO][6002] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.505 [INFO][6002] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.505 [INFO][6002] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.511 [WARNING][6002] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.511 [INFO][6002] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" HandleID="k8s-pod-network.fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Workload="172--236--109--217-k8s-calico--apiserver--77558dd99f--hb6xz-eth0" Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.513 [INFO][6002] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.518357 containerd[1580]: 2026-04-21 10:18:26.515 [INFO][5995] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51" Apr 21 10:18:26.518357 containerd[1580]: time="2026-04-21T10:18:26.518327548Z" level=info msg="TearDown network for sandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\" successfully" Apr 21 10:18:26.522407 containerd[1580]: time="2026-04-21T10:18:26.522358653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:26.522458 containerd[1580]: time="2026-04-21T10:18:26.522407843Z" level=info msg="RemovePodSandbox \"fe360901ae88ec8ff0deabef3866608a5a981f43d2dd6bfd9f09e2e0bc013a51\" returns successfully" Apr 21 10:18:26.523494 containerd[1580]: time="2026-04-21T10:18:26.523170934Z" level=info msg="StopPodSandbox for \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\"" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.559 [WARNING][6017] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-csi--node--driver--zjz5l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"768b5922-7716-4a2f-ad9a-14196f3f0888", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61", Pod:"csi-node-driver-zjz5l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e2a32b412", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.560 [INFO][6017] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.560 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" iface="eth0" netns="" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.560 [INFO][6017] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.560 [INFO][6017] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.580 [INFO][6024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.580 [INFO][6024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.580 [INFO][6024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.587 [WARNING][6024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.587 [INFO][6024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.589 [INFO][6024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.593938 containerd[1580]: 2026-04-21 10:18:26.591 [INFO][6017] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.594350 containerd[1580]: time="2026-04-21T10:18:26.593952206Z" level=info msg="TearDown network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\" successfully" Apr 21 10:18:26.594350 containerd[1580]: time="2026-04-21T10:18:26.593976976Z" level=info msg="StopPodSandbox for \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\" returns successfully" Apr 21 10:18:26.594444 containerd[1580]: time="2026-04-21T10:18:26.594413177Z" level=info msg="RemovePodSandbox for \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\"" Apr 21 10:18:26.594444 containerd[1580]: time="2026-04-21T10:18:26.594437437Z" level=info msg="Forcibly stopping sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\"" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.630 [WARNING][6038] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--109--217-k8s-csi--node--driver--zjz5l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"768b5922-7716-4a2f-ad9a-14196f3f0888", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-109-217", ContainerID:"848a5404b08aa412a1e4b41e29835a7b5c8c6f47a5a36a3b88dea6cabf2ece61", Pod:"csi-node-driver-zjz5l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.98.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali54e2a32b412", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.630 [INFO][6038] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.630 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" iface="eth0" netns="" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.630 [INFO][6038] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.630 [INFO][6038] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.657 [INFO][6045] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.657 [INFO][6045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.657 [INFO][6045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.663 [WARNING][6045] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.663 [INFO][6045] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" HandleID="k8s-pod-network.07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Workload="172--236--109--217-k8s-csi--node--driver--zjz5l-eth0" Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.665 [INFO][6045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:26.669963 containerd[1580]: 2026-04-21 10:18:26.667 [INFO][6038] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58" Apr 21 10:18:26.670411 containerd[1580]: time="2026-04-21T10:18:26.670001107Z" level=info msg="TearDown network for sandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\" successfully" Apr 21 10:18:26.673775 containerd[1580]: time="2026-04-21T10:18:26.673628831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:18:26.673775 containerd[1580]: time="2026-04-21T10:18:26.673686781Z" level=info msg="RemovePodSandbox \"07260f8c52cac6b3386380172e885212ebaa3108dede4b84f9159be21ac95a58\" returns successfully" Apr 21 10:18:38.443783 kubelet[2738]: E0421 10:18:38.443734 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:18:51.446733 kubelet[2738]: E0421 10:18:51.446690 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:18:53.444408 kubelet[2738]: E0421 10:18:53.443883 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:18:55.684846 systemd[1]: run-containerd-runc-k8s.io-eb9c7bbec6c73d52548a92a04d0447bb96225350e84abba1152e08029d33938e-runc.vvwhN0.mount: Deactivated successfully. 
Apr 21 10:19:03.444602 kubelet[2738]: E0421 10:19:03.444323 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:19:06.443389 kubelet[2738]: E0421 10:19:06.443086 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:19:11.444478 kubelet[2738]: E0421 10:19:11.443654 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:19:13.445054 kubelet[2738]: E0421 10:19:13.443941 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:19:22.209769 systemd[1]: Started sshd@7-172.236.109.217:22-205.210.31.192:65020.service - OpenSSH per-connection server daemon (205.210.31.192:65020). Apr 21 10:19:22.417259 systemd[1]: run-containerd-runc-k8s.io-a29bcba08a48d979cdd0971c14a75d27fccff9ef5026ceebcfb03f15cfbfff0a-runc.veGcxV.mount: Deactivated successfully. Apr 21 10:19:27.394811 sshd[6204]: Connection reset by 205.210.31.192 port 65020 [preauth] Apr 21 10:19:27.397019 systemd[1]: sshd@7-172.236.109.217:22-205.210.31.192:65020.service: Deactivated successfully. Apr 21 10:19:32.758506 systemd[1]: run-containerd-runc-k8s.io-a33de61d9c1eb1f42f5589700fec279af867ffd3c1b43735f83c56caa4156e69-runc.irXAp8.mount: Deactivated successfully. Apr 21 10:19:33.684772 systemd[1]: Started sshd@8-172.236.109.217:22-50.85.169.122:43056.service - OpenSSH per-connection server daemon (50.85.169.122:43056). 
Apr 21 10:19:34.310896 sshd[6312]: Accepted publickey for core from 50.85.169.122 port 43056 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:34.313940 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:34.321226 systemd-logind[1562]: New session 8 of user core. Apr 21 10:19:34.334833 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:19:34.836178 sshd[6312]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:34.844662 systemd[1]: sshd@8-172.236.109.217:22-50.85.169.122:43056.service: Deactivated successfully. Apr 21 10:19:34.851462 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:19:34.852804 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:19:34.854341 systemd-logind[1562]: Removed session 8. Apr 21 10:19:39.939081 systemd[1]: Started sshd@9-172.236.109.217:22-50.85.169.122:42492.service - OpenSSH per-connection server daemon (50.85.169.122:42492). Apr 21 10:19:40.536587 sshd[6327]: Accepted publickey for core from 50.85.169.122 port 42492 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:40.537751 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:40.543058 systemd-logind[1562]: New session 9 of user core. Apr 21 10:19:40.546948 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:19:41.026778 sshd[6327]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:41.030207 systemd[1]: sshd@9-172.236.109.217:22-50.85.169.122:42492.service: Deactivated successfully. Apr 21 10:19:41.035516 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:19:41.037115 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:19:41.038698 systemd-logind[1562]: Removed session 9. 
Apr 21 10:19:46.131748 systemd[1]: Started sshd@10-172.236.109.217:22-50.85.169.122:42496.service - OpenSSH per-connection server daemon (50.85.169.122:42496). Apr 21 10:19:46.724390 sshd[6391]: Accepted publickey for core from 50.85.169.122 port 42496 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:46.726104 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:46.731630 systemd-logind[1562]: New session 10 of user core. Apr 21 10:19:46.734960 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:19:47.225435 sshd[6391]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:47.228916 systemd[1]: sshd@10-172.236.109.217:22-50.85.169.122:42496.service: Deactivated successfully. Apr 21 10:19:47.237676 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:19:47.238246 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:19:47.239999 systemd-logind[1562]: Removed session 10. Apr 21 10:19:47.332769 systemd[1]: Started sshd@11-172.236.109.217:22-50.85.169.122:42506.service - OpenSSH per-connection server daemon (50.85.169.122:42506). Apr 21 10:19:47.927140 sshd[6407]: Accepted publickey for core from 50.85.169.122 port 42506 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:47.928817 sshd[6407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:47.934858 systemd-logind[1562]: New session 11 of user core. Apr 21 10:19:47.939904 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:19:48.465221 sshd[6407]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:48.468812 systemd[1]: sshd@11-172.236.109.217:22-50.85.169.122:42506.service: Deactivated successfully. Apr 21 10:19:48.473214 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. 
Apr 21 10:19:48.474942 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:19:48.475856 systemd-logind[1562]: Removed session 11. Apr 21 10:19:48.570499 systemd[1]: Started sshd@12-172.236.109.217:22-50.85.169.122:42522.service - OpenSSH per-connection server daemon (50.85.169.122:42522). Apr 21 10:19:49.202722 sshd[6419]: Accepted publickey for core from 50.85.169.122 port 42522 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:49.205229 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:49.210479 systemd-logind[1562]: New session 12 of user core. Apr 21 10:19:49.214956 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:19:49.444092 kubelet[2738]: E0421 10:19:49.443376 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:19:49.706324 sshd[6419]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:49.710713 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:19:49.711478 systemd[1]: sshd@12-172.236.109.217:22-50.85.169.122:42522.service: Deactivated successfully. Apr 21 10:19:49.716319 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:19:49.717837 systemd-logind[1562]: Removed session 12. Apr 21 10:19:54.809085 systemd[1]: Started sshd@13-172.236.109.217:22-50.85.169.122:58664.service - OpenSSH per-connection server daemon (50.85.169.122:58664). Apr 21 10:19:55.403577 sshd[6451]: Accepted publickey for core from 50.85.169.122 port 58664 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:55.405234 sshd[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:55.410253 systemd-logind[1562]: New session 13 of user core. 
Apr 21 10:19:55.413872 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:19:55.686016 systemd[1]: run-containerd-runc-k8s.io-eb9c7bbec6c73d52548a92a04d0447bb96225350e84abba1152e08029d33938e-runc.1Vl7WG.mount: Deactivated successfully. Apr 21 10:19:55.897619 sshd[6451]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:55.902993 systemd[1]: sshd@13-172.236.109.217:22-50.85.169.122:58664.service: Deactivated successfully. Apr 21 10:19:55.909043 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:19:55.910238 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:19:55.911896 systemd-logind[1562]: Removed session 13. Apr 21 10:19:56.004023 systemd[1]: Started sshd@14-172.236.109.217:22-50.85.169.122:58680.service - OpenSSH per-connection server daemon (50.85.169.122:58680). Apr 21 10:19:56.444063 kubelet[2738]: E0421 10:19:56.443698 2738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Apr 21 10:19:56.626864 sshd[6486]: Accepted publickey for core from 50.85.169.122 port 58680 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:56.627841 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:56.633524 systemd-logind[1562]: New session 14 of user core. Apr 21 10:19:56.635953 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:19:57.291308 sshd[6486]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:57.296881 systemd[1]: sshd@14-172.236.109.217:22-50.85.169.122:58680.service: Deactivated successfully. Apr 21 10:19:57.302520 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:19:57.302644 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:19:57.305009 systemd-logind[1562]: Removed session 14. 
Apr 21 10:19:57.398031 systemd[1]: Started sshd@15-172.236.109.217:22-50.85.169.122:58682.service - OpenSSH per-connection server daemon (50.85.169.122:58682). Apr 21 10:19:58.024227 sshd[6499]: Accepted publickey for core from 50.85.169.122 port 58682 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:58.026144 sshd[6499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:58.031792 systemd-logind[1562]: New session 15 of user core. Apr 21 10:19:58.034919 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:19:59.075800 sshd[6499]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:59.080022 systemd[1]: sshd@15-172.236.109.217:22-50.85.169.122:58682.service: Deactivated successfully. Apr 21 10:19:59.088806 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:19:59.089524 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:19:59.091028 systemd-logind[1562]: Removed session 15. Apr 21 10:19:59.178985 systemd[1]: Started sshd@16-172.236.109.217:22-50.85.169.122:58694.service - OpenSSH per-connection server daemon (50.85.169.122:58694). Apr 21 10:19:59.776169 sshd[6527]: Accepted publickey for core from 50.85.169.122 port 58694 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:19:59.778141 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:59.785809 systemd-logind[1562]: New session 16 of user core. Apr 21 10:19:59.789827 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:20:00.404416 sshd[6527]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:00.413212 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:20:00.415425 systemd[1]: sshd@16-172.236.109.217:22-50.85.169.122:58694.service: Deactivated successfully. 
Apr 21 10:20:00.419794 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:20:00.421866 systemd-logind[1562]: Removed session 16. Apr 21 10:20:00.508848 systemd[1]: Started sshd@17-172.236.109.217:22-50.85.169.122:48918.service - OpenSSH per-connection server daemon (50.85.169.122:48918). Apr 21 10:20:01.106302 sshd[6540]: Accepted publickey for core from 50.85.169.122 port 48918 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:20:01.108140 sshd[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:01.113711 systemd-logind[1562]: New session 17 of user core. Apr 21 10:20:01.119990 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:20:01.467065 systemd[1]: run-containerd-runc-k8s.io-a33de61d9c1eb1f42f5589700fec279af867ffd3c1b43735f83c56caa4156e69-runc.FpnjCa.mount: Deactivated successfully. Apr 21 10:20:01.644607 sshd[6540]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:01.649074 systemd[1]: sshd@17-172.236.109.217:22-50.85.169.122:48918.service: Deactivated successfully. Apr 21 10:20:01.656809 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:20:01.657904 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:20:01.659087 systemd-logind[1562]: Removed session 17. Apr 21 10:20:02.776964 systemd[1]: run-containerd-runc-k8s.io-a33de61d9c1eb1f42f5589700fec279af867ffd3c1b43735f83c56caa4156e69-runc.B03ikK.mount: Deactivated successfully. Apr 21 10:20:06.747045 systemd[1]: Started sshd@18-172.236.109.217:22-50.85.169.122:48930.service - OpenSSH per-connection server daemon (50.85.169.122:48930). 
Apr 21 10:20:07.349800 sshd[6596]: Accepted publickey for core from 50.85.169.122 port 48930 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:20:07.351899 sshd[6596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:07.356587 systemd-logind[1562]: New session 18 of user core. Apr 21 10:20:07.360857 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:20:07.842646 sshd[6596]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:07.847936 systemd[1]: sshd@18-172.236.109.217:22-50.85.169.122:48930.service: Deactivated successfully. Apr 21 10:20:07.854374 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:20:07.855397 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Apr 21 10:20:07.857311 systemd-logind[1562]: Removed session 18. Apr 21 10:20:12.949963 systemd[1]: Started sshd@19-172.236.109.217:22-50.85.169.122:39062.service - OpenSSH per-connection server daemon (50.85.169.122:39062). Apr 21 10:20:13.576404 sshd[6610]: Accepted publickey for core from 50.85.169.122 port 39062 ssh2: RSA SHA256:deeUednTxxs5PXnjLfey+HxkUnmR0DAEfcCpy+5NAjw Apr 21 10:20:13.578083 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:20:13.583477 systemd-logind[1562]: New session 19 of user core. Apr 21 10:20:13.586808 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:20:14.087880 sshd[6610]: pam_unix(sshd:session): session closed for user core Apr 21 10:20:14.091836 systemd[1]: sshd@19-172.236.109.217:22-50.85.169.122:39062.service: Deactivated successfully. Apr 21 10:20:14.098092 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:20:14.100025 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:20:14.102184 systemd-logind[1562]: Removed session 19.